Re: [openstack-dev] mock 1.3 breaking all of Kilo in Sid (and other cases of this kind)

2015-08-26 Thread Robert Collins
On 27 August 2015 at 02:15, Ihar Hrachyshka ihrac...@redhat.com wrote:

 I really wish that nothing of this kind was even possible. Adding
 such an upper cap is like hiding the dust under the carpet: it
 doesn't remove the issue, it just hides it. We really have too many
 of these in OpenStack. Fixing broken tests was the correct thing to
 do, IMO.


 I think whoever is interested in raising a cap is free to send patches
 and get it up. I don't think those patches would be blocked.

So this is the situation.

In Kilo we have only one mechanism to defend against new releases that
are incompatible: version caps in project trees. It's possible, but ugly
and prone to confusing errors, to express 'only use versions up to Y
to test this' vs. 'only use versions up to Y for this' [*].

In Liberty we have constraints, which apply globally across all
projects and give us a safety net; this will be available on
stable/liberty once it's created.

So: lifting the caps on kilo won't work; please don't waste your own
or others' time trying.
Lifting the caps on liberty won't be needed; we're not putting them
in in the first place. Folk such as packagers who need to know the
'tested compatible versions of dependencies' will be advised to look
in upper-constraints.txt.
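For readers unfamiliar with the distinction Robert draws here, a rough sketch of how a constraints pin relates to a requirement range (pure Python, not pip's actual resolver; package versions are made up):

```python
# Sketch of the requirements-vs-constraints split (not pip's real resolver).

def parse_version(v):
    """Turn '1.4.0' into a comparable tuple (1, 4, 0)."""
    return tuple(int(part) for part in v.split("."))

def satisfies(version, spec):
    """Check a version string against specifiers like '>=1.4.0,<1.5.0'."""
    ops = {
        ">=": lambda a, b: a >= b,
        "<=": lambda a, b: a <= b,
        "==": lambda a, b: a == b,
        "<": lambda a, b: a < b,
        ">": lambda a, b: a > b,
    }
    for clause in spec.split(","):
        for op in (">=", "<=", "==", "<", ">"):  # two-char operators first
            if clause.startswith(op):
                if not ops[op](parse_version(version),
                               parse_version(clause[len(op):])):
                    return False
                break
    return True

# requirements.txt expresses what the project claims compatibility with...
requirement = ">=1.4.0"
# ...while upper-constraints.txt records the exact version that was tested.
constraint = "1.4.0"

assert satisfies(constraint, requirement)  # the pin is a legal choice
assert satisfies("2.0.0", requirement)     # an uncapped range also admits
                                           # 2.0.0, which is why the
                                           # constraint matters for packagers
```

In practice this is what `pip install -c upper-constraints.txt -r requirements.txt` does: the requirement states compatibility, the constraint records the exact tested version.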

At the M summit I intend to drive discussion about getting the
libraries back to a rolling release model, now that we're
de-aggregating 'The Release' we're culturally aligning with these tree
boundaries being contract boundaries, and working with any older
supported release across those boundaries. If we can do that, then one
of the major drivers for caps will cease to be relevant (which was
making sure that stable/server-X works with stable/library-Y even
though there are newer releases of library-Y up on PyPI). This stops
being important if the statement we make isn't 'here is a release as a
bundle' but is instead 'server-X uses library-Y and needed version
a.b.c at release time'. There are obviously CI and API evolution
factors at play: it's not a simple or trivial discussion, but it is one
strongly implied by non-lockstep releases of the servers. (IMO at
least :)).

*: The pitfalls are as follows:
 - projects can't express different rules in install_requires vs
test-requirements.txt because of the requirements syncing checks
 - so the requirements have to be expressed in a non-synced file
 - even then, they can't be presented to pip twice or it will error,
so you have to ensure that none of the testing does 'pip install -r
requirements.txt -r test-requirements.txt -r thisnewfile ...' or it
will bail; which means 'pip install -r test-requirements.txt -r
thisnewfile' and letting requirements.txt come in via the egg-info
reflection in pbr; which means URL deps (deprecated anyway) won't
work.
 - but devs can still be confused
 - and ordering issues can lead to surprising choices of dep versions
done this way, due to pip's resolver limitations
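A tiny illustrative checker for the "presented to pip twice" pitfall above: it flags a package listed in more than one requirements file with differing specifiers, the situation that makes a combined `pip install -r a -r b` bail out. File contents are inlined and the file names are invented for the sketch.

```python
# Sketch: detect a package that appears in two requirements files with
# different specifiers (the case old pip refused to install).

def find_duplicate_specs(*req_files):
    """req_files are (filename, [requirement lines]) pairs."""
    seen = {}
    conflicts = []
    for fname, lines in req_files:
        for line in lines:
            # crude name/specifier split, good enough for 'pkg>=x,<y' lines
            name = line.split(">")[0].split("<")[0].split("=")[0].strip()
            spec = line[len(name):]
            if name in seen and seen[name][1] != spec:
                conflicts.append((name, seen[name], (fname, spec)))
            seen[name] = (fname, spec)
    return conflicts

requirements = ("requirements.txt", ["mock>=1.0"])
cap_file = ("thisnewfile", ["mock>=1.0,<1.1"])

conflicts = find_duplicate_specs(requirements, cap_file)
assert conflicts == [("mock", ("requirements.txt", ">=1.0"),
                      ("thisnewfile", ">=1.0,<1.1"))]
```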

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] mock 1.3 breaking all of Kilo in Sid (and other cases of this kind)

2015-08-26 Thread Robert Collins
On 27 August 2015 at 01:31, Thomas Goirand z...@debian.org wrote:
 On 08/25/2015 11:20 PM, Robert Collins wrote:

 So I can't upload PBR 1.3.0 to Sid. This has been dealt with
 because I am the maintainer of PBR, but really, it shouldn't have
 happened. How come, for years, upgrading PBR always worked, and suddenly,
 when you start contributing to it, it breaks backward compat? I'm having
 a hard time understanding the need to break something which
 worked perfectly for so long. I'd appreciate more details.

 More of the ad hominems.

 Robert, I'm sorry you take it this way. Sure, I was kind of frustrated
 to see all the added work I have for dealing with issues with the newer
 mock. Though I was writing to you directly as we know each other for a
 long time. I didn't intend this as an attack.

Thank you for explaining your intent. Unfortunately it very much came
across as an attack. As a data point, a number of folk reached out to
me privately expressing support for me after your email: I wasn't the
only person to read it as one. I'm happy to work through why it came
across as one if you wish - privately or publicly. I'm equally happy
to just move on - feeling better now that I know it was not intended
as one.

 As I say above, it's not a PBR problem. It's badly expressed defensive
 dependencies in kilo's runtime requirements. Fix that, and kilo will
 be happy with newer pbr.

 That'd be too much work to patch all of kilo's requirements.txt
 everywhere, I'm afraid I prefer to leave things as they are.

It would be substantial work, yes - that's why we haven't undertaken it
in the OpenStack git repos either.

 6/ Build something in Debian to deal with conflicting APIs of Python
 packages - we can do it with C ABIs (and do, all the time), but
 there's no infrastructure for doing it with Python. If we had that
 then Debian Python maintainers could treat this as a graceful
 transition rather than an awkward big-bang.

 Even if we had such a thing (which would be great), we'd still have to
 deal with transitions the way it's done in C, which is hugely painful.

It is, though in principle at least it can be less disruptive. I'm not
sure we (wearing DD hat) have made the right tradeoffs in our approach
there. But that's very close to second-guessing the folk driving the
transitions, so I'm going to avoid such guessing until I have the time
and inclination to directly help.

 One can't actually know that. Because one of the bugs in 1.0.1 is that
 many assertions appear to work even though they don't exist: tests are
 silently broken with mock 1.0.1.

 FYI, I'm going for the option of not running the failed tests because of
 what you're explaining just above.

Yes, sounds fine.
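For anyone unfamiliar with the mock bug under discussion: in mock 1.0.1, a misspelled assertion method on a Mock was silently auto-created as a child mock, so the "assertion" always passed. A small sketch against modern mock (`unittest.mock`, which adopted the 1.3-era fix) shows the loud failure you get today:

```python
# Demonstrates why tests were "silently broken" with mock 1.0.1.
from unittest import mock

m = mock.Mock()
m.do_work(42)

# A correctly spelled assertion works:
m.do_work.assert_called_with(42)

# A misspelled one now fails loudly; under mock 1.0.1 this line would have
# returned an auto-created Mock and the "test" would have passed without
# checking anything.
try:
    m.do_work.assert_caled_with(99)   # typo: "caled"
    silently_passed = True            # old mock-1.0.1 behaviour
except AttributeError:
    silently_passed = False           # mock 1.3+ / unittest.mock behaviour

assert silently_passed is False
```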

 Now, the most annoying one is with testtools (ie: #796542). I'd
 appreciate having help on that one.

 Twisted's latest releases moved a private symbol that testtools
 unfortunately depends on.
 https://github.com/testing-cabal/testtools/pull/149 - and I just
 noticed now that Colin has added the test matrix we need, so we can
 merge this and get a release out this week.

 Hum... this is for the latest testtools, right? Could you help me with
 fixing testtools 0.9.39 in Sid, so that Kilo can continue to build
 there? Or is this too much work?

Liberty will require a newer testtools, but the patch to twisted
should be easily backportable - nothing else has changed near it (I
just did an inspection of all the patches over the last year) - so it
should trivially apply. [Except .travis.yml, which is irrelevant to
Debian].

However, I'm not aware of any API breaks in testtools 1.8.0 that would
affect kilo - we run the kilo tests in CI with latest testtools
release; you may need the related dependencies updated as well though
- and I'm not certain we've updated the minimum versions of deps in
the testtools requirements.txt - but as long as the test suite passes
I expect it will be fine.

Cheers,
Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] snapshot and cloning for NFS backend

2015-08-26 Thread Nikola Đipanov
On 07/28/2015 09:00 AM, Kekane, Abhishek wrote:
 Hi Devs,
 
  
 
 There is an NFS backend driver for cinder, which supports only limited
 volume handling features. Specifically, snapshot and cloning
 features are missing.
 
  
 
 Eric Harney has proposed an NFS driver snapshot feature [1][2][3],
 which was approved in Dec 2014 but has not been implemented yet.
 
  
 
 [1] blueprint https://blueprints.launchpad.net/cinder/+spec/nfs-snapshots
 
 [2] cinder-specs https://review.openstack.org/#/c/133074/  - merged for
 Kilo but moved to Liberty
 
 [3] implementation https://review.openstack.org/#/c/147186/  - WIP
 
  
 
 As of now, the nova patch [4] is a blocker for this feature.
 
 I have tested this feature by applying the nova patch [4], and it is working
 as expected.
 
  
 
 [4] https://review.openstack.org/#/c/149037/
 

so [4] is actually related to the following bug (it is linked on the
review):

https://bugs.launchpad.net/nova/+bug/1416132

The proposed patch is - as was discussed in some detail on the review -
not the right approach, for several reasons.

I have added a comment on the bug [1] outlining what I think is the
right solution here; however, it is far from a trivial change.

Let me know if the comment on the bug makes sense and if I need to add
more information.

I will try to devote some time to fixing this, as I believe it is
causing us a lot of problems in the gate on an ongoing basis (see [2]
for example), but the discussion in the bug should be enough to get
anyone else who may want to pick it up on the right path to making progress!

Best,
N.

[1] https://bugs.launchpad.net/nova/+bug/1416132/comments/8
[2] https://bugs.launchpad.net/nova/+bug/1445021




Re: [openstack-dev] [requirements] dependencies problem on different release

2015-08-26 Thread Robert Collins
On 27 August 2015 at 02:00, Gareth academicgar...@gmail.com wrote:
 Hey guys,

 I have a question about dependencies. There is an example:

 On 2014.1, project A is released with its dependency in requirements.txt
 which contains:

 foo>=1.5.0
 bar>=2.0.0,<2.2.0

 and half a year later, the requirements.txt changes to:

 foo>=1.7.0
 bar>=2.1.0,<2.2.0

 It looks fine, but a potential change would be that the upstream versions of
 foo and bar become 2.0.0 and 3.0.0 respectively (a major version upgrade
 means there are incompatible changes).

 For bar, there will be no problem, because <2.2.0 shields it from
 major version changes. But with foo 2.0.0, the installation of
 2014.1 A will break, because current development can't predict every
 incompatible change in the future.

Correct. But bar is actually a real problem for single-instance binary
distributors - like Debian-family distributions - where only one
version of bar can be in the archive at once. For those distributions,
when bar 3 comes out, they cannot add it to the archive until a new
release of project A happens (or they break project A). They also
can't add anything to the archive that depends on bar >= 2.2.0, because
they can't add bar. So it leads to gridlock. We are now avoiding
adding such defensive upper bounds to OpenStack's requirements, and
won't add them except in exceptional circumstances: when we /know/ that
the thing is broken, we may - if we can't get it fixed.
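To make Gareth's scenario concrete, here is a pure-Python sketch (not the real pip resolver; the versions come from the example above):

```python
# Sketch: the cap on bar shields the 2014.1 release from bar 3.0.0, while
# the uncapped foo requirement happily accepts an incompatible foo 2.0.0.

def v(s):
    """'2.1.0' -> (2, 1, 0) for ordering comparisons."""
    return tuple(int(x) for x in s.split("."))

def pick(available, lower=None, upper=None):
    """Newest available version within [lower, upper)."""
    ok = [a for a in available
          if (lower is None or v(a) >= v(lower))
          and (upper is None or v(a) < v(upper))]
    return max(ok, key=v) if ok else None

# Upstream later ships incompatible major releases:
foo_releases = ["1.5.0", "1.7.0", "2.0.0"]
bar_releases = ["2.0.0", "2.1.0", "3.0.0"]

# 2014.1 requirements: foo>=1.5.0 and bar>=2.0.0,<2.2.0
assert pick(bar_releases, lower="2.0.0", upper="2.2.0") == "2.1.0"  # cap: safe
assert pick(foo_releases, lower="1.5.0") == "2.0.0"  # uncapped: breaks install
```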

 A real example is enabling Rally for OpenStack Juno. Rally doesn't support
 old releases officially, but I could check out its code at the Juno release
 date, which makes both codebases match. However, even if I use the old
 requirements.txt to install dependencies, many packages will be
 installed at their current upstream versions and some of them break. An ugly
 way is to copy the pip list from an old Juno environment and install those
 versions explicitly. I hope there are better ways to do this work. Anyone
 have smart ideas?

As Boris says, use virtualenvs. They'll solve 90% of the pain.
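For completeness, a minimal sketch of that advice using only the stdlib (the env path is illustrative):

```python
# Isolate an old release's dependencies in a virtualenv so system-wide
# package upgrades can't break it. The directory name is made up.
import tempfile
import venv
from pathlib import Path

env_dir = Path(tempfile.mkdtemp()) / "juno-rally"
venv.create(env_dir, with_pip=False)  # with_pip=True would bootstrap pip too

# The environment has its own interpreter and site-packages; installing the
# Juno-era requirements.txt into it (env_dir/bin/pip install -r
# requirements.txt, once pip is bootstrapped) touches nothing outside env_dir.
assert (env_dir / "pyvenv.cfg").exists()
```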

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [magnum] versioned objects changes

2015-08-26 Thread Ryan Rossiter
I've been working with Nova's versioned objects lately to help catch 
people when object changes are made. There are a lot of object-related
tests in Nova for this, and a major one I can see helping this situation 
is this test [1]. Looking through the different versioned objects within 
Magnum, I don't see any objects that hold subobjects, so tests like [2] 
are not really necessary yet.


I have uploaded a review for bringing [1] from Nova into Magnum [3]. I 
think this will be a major step in the right direction towards keeping 
track of object changes that will help with rolling upgrades.


[1]: 
https://github.com/openstack/nova/blob/master/nova/tests/unit/objects/test_objects.py#L1262-L1286
[2]: 
https://github.com/openstack/nova/blob/master/nova/tests/unit/objects/test_objects.py#L1314

[3]: https://review.openstack.org/#/c/217342/
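For readers who haven't seen the Nova test at [1], the idea can be sketched roughly like this (the object and field names are invented, and the real test hashes much more of the object's metadata):

```python
# Sketch: fingerprint each versioned object's fields so any change forces
# a reviewer to consciously bump the object version.
import hashlib

def fingerprint(fields):
    """Stable hash over the (name, type) pairs of an object's fields."""
    blob = ",".join("%s:%s" % (n, t) for n, t in sorted(fields.items()))
    return hashlib.md5(blob.encode()).hexdigest()[:8]

# Checked-in expectations, updated only when the version is bumped:
expected = {"Bay": ("1.0", fingerprint({"uuid": "str", "name": "str"}))}

def check(name, version, fields):
    want_version, want_hash = expected[name]
    assert version == want_version, "version bump needed?"
    assert fingerprint(fields) == want_hash, (
        "%s changed fields without a version bump" % name)

check("Bay", "1.0", {"uuid": "str", "name": "str"})  # passes
try:
    check("Bay", "1.0", {"uuid": "str", "name": "str", "node_count": "int"})
    raise SystemExit("should have failed")
except AssertionError:
    pass  # adding a field without bumping the version is caught
```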

On 8/26/2015 3:47 AM, Grasza, Grzegorz wrote:


Hi,

I noticed that right now, when we make changes (adding/removing 
fields) in 
https://github.com/openstack/magnum/tree/master/magnum/objects , we 
don't change object versions.


The idea of objects is that each change in their fields should be 
versioned, documentation about the change should also be written in a 
comment inside the object and the obj_make_compatible method should be 
implemented or updated. See an example here:


https://github.com/openstack/nova/commit/ad6051bb5c2b62a0de6708cd2d7ac1e3cfd8f1d3#diff-7c6fefb09f0e1b446141d4c8f1ac5458L27
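The obj_make_compatible pattern mentioned above can be sketched roughly as follows (field names and versions are illustrative, and real oslo.versionedobjects passes the target version as a string rather than a tuple):

```python
# Sketch: when talking to a service running an older object version, drop
# fields that version doesn't know about before sending the object.

class Bay:
    # Version history (as comments, per the convention described above):
    # 1.0: initial version
    # 1.1: added 'node_count'
    VERSION = "1.1"

    def __init__(self, **fields):
        self.fields = fields

    def obj_make_compatible(self, primitive, target_version):
        """Mutate a serialized dict so a target_version consumer accepts it."""
        if target_version < (1, 1):
            primitive.pop("node_count", None)
        return primitive

bay = Bay(uuid="abc", node_count=3)
primitive = dict(bay.fields)
bay.obj_make_compatible(primitive, (1, 0))
assert primitive == {"uuid": "abc"}  # the 1.0-era consumer sees only 1.0 fields
```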

The question is: do you think magnum should support rolling upgrades
from the next release, or is it still too early?


If yes, I think core reviewers should start checking for these 
incompatible changes.


To clarify, rolling upgrades means support for running magnum services 
at different versions at the same time.


In Nova, there is an RPC call in the conductor to backport objects, 
which is called when older code gets an object it doesn’t understand. 
This patch does this in Magnum: https://review.openstack.org/#/c/184791/ .


I can report bugs and propose patches with version changes for this 
release, to get the effort started.


In Mitaka, when Grenade gets multi-node support, it can be used to add 
CI tests for rolling upgrades in Magnum.


/ Greg





--
Thanks,

Ryan Rossiter



Re: [openstack-dev] [oslo] Help with stable/juno branches / releases

2015-08-26 Thread Doug Hellmann
Tony,

Thanks for digging into this!

I should be able to help, but right now we're ramping up for the L3
feature freeze and there are a lot of release-related activities going
on. Can this wait a few weeks for things to settle down again?

Doug

Excerpts from Tony Breeds's message of 2015-08-24 19:57:52 +1000:
 Hi All,
 Firstly I apologise for the rambling nature of this email.  There's a lot
 of context/data here and I'm not sure of the best way to present it.
 
 In [1] we discovered that stable/juno devstack is broken. After a little
 digging we opened [2]. This required creating a stable/juno branch for
 python-swiftclient (done), raising the upper cap of python-swiftclient
 from <=2.3.1 to <2.4.0 (done), and releasing 2.3.2 (in progress).
 
 The change in upper limits for python-swiftclient has created several reviews
 (about 20).  Why am I telling you this?  Several of those reviews seem to be
 stuck on oslo related issues.
 
 * openstack/python-ceilometerclient :: 
 https://review.openstack.org/#/c/173126/
  It looks like it's getting 2.2.0 of oslo.i18n, but it should be capped at
 1.3.1 in stable/juno
  ---
  Collecting oslo.utils<1.5.0,>=1.4.0 (from -r
 /home/jenkins/workspace/gate-python-ceilometerclient-python27/requirements.txt
  (line 4))
Downloading 
 http://pypi.ORD.openstack.org/packages/py2.py3/o/oslo.utils/oslo.utils-1.4.0-py2.py3-none-any.whl
  (55kB)
  Collecting oslo.i18n>=1.3.0 (from oslo.utils<1.5.0,>=1.4.0->-r
 /home/jenkins/workspace/gate-python-ceilometerclient-python27/requirements.txt
  (line 4))
Downloading 
 http://pypi.ORD.openstack.org/packages/py2.py3/o/oslo.i18n/oslo.i18n-2.2.0-py2.py3-none-any.whl
  ---
 I think this means we need an oslo.utils 1.4.1 release with the
 requirements capped from g-r. There isn't a stable/juno branch for
 oslo.utils, so would we need to make one based on 1.4.0?
 
  * openstack/pycadf :: https://review.openstack.org/#/c/206719/
 ---
  Collecting oslo.messaging<1.5.0,>=1.4.0 (from -r
 /home/jenkins/workspace/gate-pycadf-python26/test-requirements.txt (line 12))
   Downloading 
 http://pypi.DFW.openstack.org/packages/py2/o/oslo.messaging/oslo.messaging-1.4.1-py2-none-any.whl
  (129kB)
  Collecting oslo.utils>=0.2.0 (from oslo.messaging<1.5.0,>=1.4.0->-r
 /home/jenkins/workspace/gate-pycadf-python26/test-requirements.txt (line 12))
Downloading 
 http://pypi.DFW.openstack.org/packages/py2.py3/o/oslo.utils/oslo.utils-2.2.0-py2.py3-none-any.whl
  (58kB)
 ---
 Which I think needs a 1.4.2 release of oslo.messaging with requirements 
 capped from g-r
 
  * openstack/oslo.i18n :: https://review.openstack.org/#/c/206714
 This is failing with 'AttributeError: assert_calls'. I think this is
 because we're installing mock 1.3.0, but mock should be capped at 1.0.1 in
 stable/juno
 ---
  Collecting oslotest<1.4.0,>=1.1.0 (from -r
 /home/jenkins/workspace/gate-oslo.i18n-python26/test-requirements.txt (line 
 4))
   Downloading 
 http://pypi.region-b.geo-1.openstack.org/packages/py2.py3/o/oslotest/oslotest-1.3.0-py2.py3-none-any.whl
 Collecting mock>=1.0 (from oslotest<1.4.0,>=1.1.0->-r
 /home/jenkins/workspace/gate-oslo.i18n-python26/test-requirements.txt (line 
 4))
   Downloading 
 http://pypi.region-b.geo-1.openstack.org/packages/2.7/m/mock/mock-1.3.0-py2.py3-none-any.whl
  (56kB)
 ---
 I think this means that we need oslotest 1.3.1 with the versions capped
 from g-r. As with oslo.utils, I think this needs a new branch?
 
 There are several other issues in other projects but I figured I had to start
 somewhere.
 
 To be clear, I'm happy to propose patches to do these things (but clearly I
 can't make branches or cut releases). I'm also very happy to be told I'm
 doing this wrong and my conclusions are incorrect (bonus points if you help
 me find the right solution).
 
 Yours Tony.
 
 [1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072193.html
 [2] https://bugs.launchpad.net/python-swiftclient/+bug/1486576



Re: [openstack-dev] [tripleo] CLI and RHEL registration of overcloud nodes

2015-08-26 Thread Steven Hardy
On Wed, Aug 26, 2015 at 05:28:47PM +0200, Jan Provazník wrote:
 Hi,
 although rdomanager-oscplugin is not yet under TripleO, it should be soon, so
 I'm sending this to the TripleO audience.
 
 Satellite registration from the user's point of view is now done by passing a
 couple of specific parameters when running the openstack overcloud deploy
 command [1]. rdomanager-oscplugin checks the presence of these params, adds
 additional env files which are then passed to heat, and also generates a
 temporary env file containing default_params required for the
 rhel-registration template [2].
 
 This approach is not optimal because it means that:
 a) registration params have to be passed on each call of openstack
 overcloud deploy
 b) other CLI commands (pkg update, node deletion) have to implement/reuse the
 same logic (support the same parameters) to be consistent

I think the problem can be generalized to say that the CLI should never be
creating environment files internally, and ideally it should also not
contain any hard-coded hidden internal defaults or anything coupled to the
template implementation.

Everything should be provided via -e environment files, exactly like they
are for python-heatclient, and if we need some fall-back defaults, they
should be maintained in a defaults.yaml file somewhere, or, just use the
defaults in the actual heat templates.

I also think the use of special CLI switches should be discouraged or even
deprecated, e.g. stuff like --compute-flavor vs. just passing
OvercloudComputeFlavor in an environment file.
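The "-e environment file" model argued for here boils down to simple last-one-wins merging, roughly as sketched below (inlined dicts stand in for YAML files to keep this self-contained; parameter names are illustrative):

```python
# Sketch of repeated '-e' semantics: parameters come only from explicit
# files, later files override earlier ones, no hidden client-side defaults.

def merge_environments(*envs):
    """Later environments override earlier ones, as with repeated -e flags."""
    merged = {}
    for env in envs:
        merged.update(env.get("parameter_defaults", {}))
    return merged

defaults_env = {"parameter_defaults": {"OvercloudComputeFlavor": "baremetal"}}
user_env = {"parameter_defaults": {"OvercloudComputeFlavor": "compute",
                                   "NeutronPublicInterface": "eth0"}}

params = merge_environments(defaults_env, user_env)
assert params["OvercloudComputeFlavor"] == "compute"  # user file wins
assert params["NeutronPublicInterface"] == "eth0"
```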

 This is probably not necessary, because registration params should be needed
 only when creating the OC stack; there's no need to pass them later when
 running any update operation.

Note that this all becomes *much* easier with a properly working heat PATCH
update, which oscplugin is already trying to use:

https://github.com/rdo-management/python-rdomanager-oscplugin/blob/master/rdomanager_oscplugin/v1/overcloud_deploy.py#L436

The problem is PATCH updates were only partially implemented in heat for
Kilo, which I've been trying to fix recently:

https://bugs.launchpad.net/python-heatclient/+bug/1224828

The main part needed to avoid needing all the environment files every time
has now landed:

https://review.openstack.org/#/c/154619

We might consider if that's a valid kilo backport, but for upstream TripleO
this is already available :)

There are a couple still outstanding (to allow not passing the template on
update):

https://review.openstack.org/#/c/205754/
https://review.openstack.org/#/c/205755/

 As a short term solution I think it would be better to pass registration
 templates in the same way as other extra files (-e param) - although user
 will still have to pass additional parameter when using rhel-registration,
 it will be consistent with the way how e.g. network configuration is used
 and the -e mechanism for passing additional env files is already supported
 in other CLI commands. The _create_registration_env method [2] would be
 updated to generate/add just the user's credentials [3] env file - and this
 would be needed only when creating overcloud, no need to pass them when
 updating stack later.

+1, I think making everything consistent via -e parameters makes sense.
That then becomes easier still once the heat work mentioned above is
available to everyone using the CLI, since you'll only have to pass the
environments which change on update.

Steve



Re: [openstack-dev] [TripleO] Encapsulating logic and state in the client

2015-08-26 Thread Dougal Matthews
- Original Message -
 From: Jay Dobies jason.dob...@redhat.com
 To: openstack-dev@lists.openstack.org
 Sent: Tuesday, 25 August, 2015 2:31:02 PM
 Subject: Re: [openstack-dev] [TripleO] Encapsulating logic and state in the 
 client
 
  Thinking about this further, the interesting question to me is how much
  logic we aim to encapsulate behind an API. For example, one of the simpler
  CLI commands we have in RDO-Manager (which is moving upstream[1]) is to
  run introspection on all of the Ironic nodes. This involves a series of
   commands that need to be run in order, and it can take upwards of 20
   minutes depending on how many nodes you have. However, this just
   communicates with Ironic (and ironic inspector), so is it worth hiding
  behind an API? I am inclined to say that it is so we can make the end
  result as easy to consume as possible but I think it might be difficult
  to draw the line in some cases.
 
   The question then arises: what would this API look like? Generally
   speaking I feel like it's a workflow API; it shouldn't offer
   many (or any?) unique features, rather it manages the process of
   performing a series of operations across multiple APIs. There have been
   attempts at doing this within OpenStack before in a more general case;
   I wonder what we can learn from those.
 
 This is where my head is too. The OpenStack on OpenStack thing means we
 get to leverage the existing tools and users can leverage their existing
 knowledge of the products.
 
 But what I think an API will provide is guidance on how to achieve that
 (the big argument there being if this should be done in an API or
 through documentation). It coaches new users and integrations on how to
 make all of the underlying pieces play together to accomplish certain
 things.
 
 To your question on that ironic call, I'm split on how I feel.
 
 On one hand, I really like the idea of the TripleO API being able to
 support an OpenStack deployment entirely on its own. You may want to go
 directly to some undercloud tools for certain edge cases, but for the
 most part you should be able to accomplish the goal of deploying
 OpenStack through the TripleO APIs.
 
 But that's not necessarily what TripleO wants to be. I've seen the
 sentiment of it only being tools for deploying OpenStack, in which case
 a single API isn't really what it's looking to do. I still think we need
 some sort of documentation to guide integrators instead of saying look
 at the REST API docs for these 5 projects, but that documentation is
 lighter weight than having pass through calls in a TripleO API.

I don't really feel like documentation is enough if we want a
consistent result between multiple clients (i.e. a CLI and a UI that may
not even be in Python). We have already seen how two Python implementations
of the same thing can vary (Tuskar-UI vs the RDO-Manager CLI).

I see the API being like the RDO-Manager CLI is at the moment: something
you can use for the generic workflow, but then, as you say, you can go
directly to the services if needed. It would just make the generic
workflow consistent for its consumers.

Having said that, I do still share your general concerns. So I am thinking
out loud a bit here.

  Unfortunately, as undesirable as these are, they're sometimes necessary
  in the world we currently live in. The only long-term solution to this
  is to put all of the logic and state behind a ReST API where it can be
  accessed from any language, and where any state can be stored
  appropriately, possibly in a database. In principle that could be
  accomplished either by creating a tripleo-specific ReST API, or by
  finding native OpenStack undercloud APIs to do everything we need. My
  guess is that we'll find a use for the former before everything is ready
  for the latter, but that's a discussion for another day. We're not there
  yet, but there are things we can do to keep our options open to make
  that transition in the future, and this is where tripleo-common comes in.
 
  I submit that anything that adds logic or state to the client should be
  implemented in the tripleo-common library instead of the client plugin.
  This offers a couple of advantages:
 
  - It provides a defined boundary between code that is CLI-specific and
  code that is shared between the CLI and GUI, which could become the
  model for a future ReST API once it has stabilised and we're ready to
  take that step.
  - It allows for an orderly transition when that happens - we can have a
  deprecation period during which the tripleo-common library is imported
  into both the client and the (future, hypothetical) ReST API.
 
  cheers,
  Zane.
 
 
 
  [1]: 

[openstack-dev] [magnum] devstack/heat problem with master_wait_condition

2015-08-26 Thread Pitucha, Stanislaw Izaak
Hi all,

I’m trying to stand up magnum according to the quickstart instructions with 
devstack.

There’s one resource which times out and fails: master_wait_condition. The kube
master (fedora) host seems to be created - I can log in to it via ssh - and the
other resources are created successfully.

 

What can I do from here? How do I debug this? I tried to look for the wc_notify
script itself so I could run it manually, but I can’t even find it.

 

Best Regards,

Stanisław Pitucha

 





[openstack-dev] [ceilometer] [rally] [sahara] [heat] [congress] [tripleo] ceilometer in gate jobs

2015-08-26 Thread Chris Dent


[If any of this is wrong I hope someone from infra or qa will
correct me. Thanks. This feels a bit cumbersome so perhaps there is
a way to do it in a more automagic fashion[1].]

In the near future ceilometer will be removing itself from the core
of devstack and using a plugin instead. This is to allow more
independent control and flexibility.

These are the related reviews:

* remove from devstack: https://review.openstack.org/196383
* updated jenkins jobs: https://review.openstack.org/196446

If a project is using ceilometer in its gate jobs then before the
above can merge adjustments need to be made to make sure that the
ceilometer plugin is enabled. The usual change for this would be a
form of:

  DEVSTACK_LOCAL_CONFIG+=$'\n'"enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer"

I'm not entirely clear on how we will coordinate this,
but it is clear some coordination will need to be done such that
ceilometer remains in devstack until everything that is using
ceilometer in devstack is ready to use the plugin.

A grep through the jenkins jobs suggests that the projects in
$SUBJECT (rally, sahara, heat, congress, tripleo) will need some
changes.

How shall we proceed with this?

One option is for project team members[2] to make a stack of dependent
patches that are dependent on 196446 above (which itself is dependent
on 196383) so that it all happens in one fell swoop.

What are the other options?

Thanks for your input.

[1] That is, is it worth considering adding functionality to
devstack's sense of 'enabled' such that if a service is enabled,
devstack knows to look for a plugin when it doesn't find local
support? With the destruction of the stackforge namespace we can
perhaps guess the git URL for plugins.

[2] Or me if that's better.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [glance][murano] Glance V3 (Artifacts) usage in Murano

2015-08-26 Thread Flavio Percoco

On 24/08/15 18:37 +0300, Alexander Tivelkov wrote:

Hi folks,

In the upcoming L release Murano is going to use the Glance Artifact
Repository feature implemented as part of EXPERIMENTAL Glance V3 API.

The server-side support for this feature is already merged in glance's master
branch, while the client code is not: it was agreed that the v3 client will
stay in a dedicated feature branch (feature/artifacts) and will not be released
until the API is stable and final (a major API refactoring based on the
feedback from the API WG is on the way and is likely to happen in M). So, there
will be no v3-aware releases of python-glanceclient on PyPI until then. The
early adopters of the V3 API are encouraged to build tarballs out of the
feature branch on their own and use them, keeping in mind that the API is
EXPERIMENTAL, so everything may be (and will be) changed.

However, Murano needs some way to consume Glance V3 right now, and  - as it has
a voting requirements job at the gate - it cannot just put a git branch
reference in its requirements.txt. It needs some kind of a release which would
be part of global requirements etc.

So, it was decided to temporarily copy-paste the experimental part of the glance
client into the murano client (i.e. to copy python-glanceclient/feature/
artifacts/v3 to python-muranoclient/master/artifacts) and release it as part of
the next several releases of python-muranoclient.

When the Glance V3 API is stable, we'll put its client to the master branch of
python-glanceclient and release it normally, then dropping the temporary copy
from python-muranoclient. 

Until then, all the changes to the experimental client should be done in the
feature/artifacts branch of glance client and copied (synced) to murano client.
Similar to oslo.incubator sync procedure, just without a shell script :)

I hope that the need to do this copy-pasting will not last for long and the v3
API becomes stable soon enough.
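The copy/sync step described above ("similar to oslo.incubator sync procedure, just without a shell script") could look roughly like the following. The directory layout inside each repo is an assumption for illustration, not the actual project structure:

```python
# Minimal sketch of the described sync: mirror the experimental client code
# from a glanceclient feature-branch checkout into the muranoclient tree.
# The sub-paths used here are assumptions, not the real repo layout.
import shutil
from pathlib import Path

def sync_artifacts(glanceclient_repo, muranoclient_repo):
    src = Path(glanceclient_repo) / "glanceclient" / "v3"      # assumed layout
    dst = Path(muranoclient_repo) / "muranoclient" / "glance"  # assumed layout
    if dst.exists():
        shutil.rmtree(dst)  # drop the stale copy before re-syncing
    shutil.copytree(src, dst)
    return dst
```

Run against a checkout of the feature/artifacts branch, this would keep the temporary copy in python-muranoclient in lockstep until the V3 client lands in glanceclient's master.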


I was going to suggest having a separate library just for this code.
You'd work on this library until the API is stable and then you'd
merge it back into glanceclient as soon as the feature is stable
server side.

We could also have a way to load the library code as a plugin for
glanceclient - similar to the way openstack client works - but that
requires a spec for mitaka.

Anyway, the library could be called `python-glanceclient-artifacts` or
something along those lines.

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [glance][murano] Glance V3 (Artifacts) usage in Murano

2015-08-26 Thread Nikhil Komawar
We considered that option and have finally agreed on what Alex suggests
at the rel-mgrs office as this is the least painful path for most.


On 8/26/15 12:04 PM, Flavio Percoco wrote:
 On 24/08/15 18:37 +0300, Alexander Tivelkov wrote:
 Hi folks,

 In the upcoming L release Murano is going to use the Glance Artifact
 Repository feature implemented as part of EXPERIMENTAL Glance V3 API.

 The server-side support of this feature is already merged in glance's
 master
 branch, while the client code is not: it was agreed that the v3's
 client will
 stay in a dedicated feature branch (feature/artifacts) and will not
 be released
 until the API is stable and final (a major API refactoring based on the
 feedback from API WG is on the way and is likely to happen in M). So,
 there
 will be no v3-aware releases of python-glanceclient on pypi until
 then. The
 early adopters of V3 API are encouraged to build the tarballs out of the
 feature branch on their own and use them, keeping in mind that the
 API is
 EXPERIMENTAL so everything may be (and will be) changed.

 However, Murano needs some way to consume Glance V3 right now, and  -
 as it has
 a voting requirements job at the gate - it cannot just put a git branch
 reference in its requirements.txt. It needs some kind of a release
 which would
 be part of global requirements etc.

 So, it was decided to temporarily copy-paste the experimental part of
 glance
 client into the murano client (i.e. to copy python-glanceclient/feature/
 artifacts/v3 to python-muranoclient/master/artifacts) and release it
 as part of
 several next releases of python-muranoclient. 

 When the Glance V3 API is stable, we'll put its client to the master
 branch of
 python-glanceclient and release it normally, then dropping the
 temporary copy
 from python-muranoclient. 

 Until then, all the changes to the experimental client should be done
 in the
 feature/artifacts branch of glance client and copied (synced) to
 murano client.
 Similar to oslo.incubator sync procedure, just without a shell script :)

 I hope that the need to do this copy-pasting will not last for long
 and the v3
 API becomes stable soon enough.

 I was going to suggest having a separate library just for this code.
 You'd work on this library until the API is stable and then you'd
 merge it back into glanceclient as soon as the feature is stable
 server side.

 We could also have a way to load the library code as a plugin for
 glanceclient - similar to the way openstack client works - but that
 requires a spec for mitaka.

 Anyway, the library could be called `python-glanceclient-artifacts` or
 something along those lines.

 Flavio




-- 

Thanks,
Nikhil




Re: [openstack-dev] [neutron][lbaas] L7 - Tasks

2015-08-26 Thread Doug Wiegley
I think that if we’ve got someone identified to do the ref implementation, and 
that is code complete by 8/31, we can apply for a feature freeze exception. If 
we don’t have someone assigned to that task, it’ll slip.

doug

 On Aug 26, 2015, at 7:22 AM, Samuel Bercovici samu...@radware.com wrote:
 
 Hi,
 
 I think that Evgeny is trying to complete everything besides the reference 
 implementation (API, CLI, Tempest, etc.).
 Evgeny will join the Octavia IRC meeting so it could be a good opportunity to 
 get status and sync activities.
 As far as I know 8/31 is feature freeze and not code complete. Please correct 
 me, if I am wrong.
 
 -Sam.
 
 
 
 -Original Message-
 From: Eichberger, German [mailto:german.eichber...@hp.com] 
 Sent: Wednesday, August 26, 2015 2:46 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron][lbaas] L7 - Tasks
 
 Hi Evgeny,
 
 Of course we would love to have L7 in Liberty but that window is closing on 
 8/31. We usually monitor the progress (via Stephen) at the weekly Octavia 
 meeting. Stephen indicated that we won't get it before the L3 deadline and 
 with all the open items it might still be tight. I am wondering if you can 
 advise on that.
 
 Thanks,
 German
 
 From: Evgeny Fedoruk evge...@radware.commailto:evge...@radware.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
 Date: Tuesday, August 25, 2015 at 9:33 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [neutron][lbaas] L7 - Tasks
 
 Hello
 
 I would like to know if there is a plan for L7 extension work for Liberty.
 There is an extension patch-set here: https://review.openstack.org/#/c/148232/
 We will also need to do CLI work, which I have started; I will commit an
 initial patch-set soon. The reference implementation was started by Stephen here:
 https://review.openstack.org/#/c/204957/
 A tempest tests update should be done as well; I do not know if it was 
 discussed at IRC meetings.
 Please share your thought about it.
 
 
 Regards,
 Evg
 
 
 




Re: [openstack-dev] [glance][murano] Glance V3 (Artifacts) usage in Murano

2015-08-26 Thread Flavio Percoco

On 26/08/15 12:22 -0400, Nikhil Komawar wrote:

We considered that option and have finally agreed on what Alex suggests
at the rel-mgrs office as this is the least painful path for most.


It would have been nice to know it had already been discussed among
some glance members and other folks. I'm sure you all considered other
options too so I'm good.

It'd be nice to have a link to the discussion too.

Thanks,
Flavio




On 8/26/15 12:04 PM, Flavio Percoco wrote:

On 24/08/15 18:37 +0300, Alexander Tivelkov wrote:

Hi folks,

In the upcoming L release Murano is going to use the Glance Artifact
Repository feature implemented as part of EXPERIMENTAL Glance V3 API.

The server-side support of this feature is already merged in glance's
master
branch, while the client code is not: it was agreed that the v3's
client will
stay in a dedicated feature branch (feature/artifacts) and will not
be released
until the API is stable and final (a major API refactoring based on the
feedback from API WG is on the way and is likely to happen in M). So,
there
will be no v3-aware releases of python-glanceclient on pypi until
then. The
early adopters of V3 API are encouraged to build the tarballs out of the
feature branch on their own and use them, keeping in mind that the
API is
EXPERIMENTAL so everything may be (and will be) changed.

However, Murano needs some way to consume Glance V3 right now, and  -
as it has
a voting requirements job at the gate - it cannot just put a git branch
reference in its requirements.txt. It needs some kind of a release
which would
be part of global requirements etc.

 So, it was decided to temporarily copy-paste the experimental part of
glance
client into the murano client (i.e. to copy python-glanceclient/feature/
artifacts/v3 to python-muranoclient/master/artifacts) and release it
as part of
several next releases of python-muranoclient.

When the Glance V3 API is stable, we'll put its client to the master
branch of
python-glanceclient and release it normally, then dropping the
temporary copy
from python-muranoclient.

Until then, all the changes to the experimental client should be done
in the
feature/artifacts branch of glance client and copied (synced) to
murano client.
Similar to oslo.incubator sync procedure, just without a shell script :)

I hope that the need to do this copy-pasting will not last for long
and the v3
API becomes stable soon enough.


 I was going to suggest having a separate library just for this code.
You'd work on this library until the API is stable and then you'd
merge it back into glanceclient as soon as the feature is stable
server side.

We could also have a way to load the library code as a plugin for
glanceclient - similar to the way openstack client works - but that
requires a spec for mitaka.

Anyway, the library could be called `python-glanceclient-artifacts` or
something along those lines.

Flavio





--

Thanks,
Nikhil




--
@flaper87
Flavio Percoco




Re: [openstack-dev] [neutron][lbaas] L7 - Tasks

2015-08-26 Thread Evgeny Fedoruk
Hi,

As Sam mentioned, I will join the Octavia meeting today; hope there will be
time to discuss the L7 tasks.
L7 related patches in review now are:
Extension https://review.openstack.org/#/c/148232
CLI https://review.openstack.org/#/c/217276
Reference implementation  https://review.openstack.org/#/c/204957

Evg



-Original Message-
From: Samuel Bercovici 
Sent: Wednesday, August 26, 2015 4:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Evgeny Fedoruk
Subject: RE: [openstack-dev] [neutron][lbaas] L7 - Tasks

Hi,

I think that Evgeny is trying to complete everything besides the reference 
implementation (API, CLI, Tempest, etc.).
Evgeny will join the Octavia IRC meeting so it could be a good opportunity to 
get status and sync activities.
As far as I know 8/31 is feature freeze and not code complete. Please correct 
me, if I am wrong.

-Sam.



-Original Message-
From: Eichberger, German [mailto:german.eichber...@hp.com] 
Sent: Wednesday, August 26, 2015 2:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] L7 - Tasks

Hi Evgeny,

Of course we would love to have L7 in Liberty but that window is closing on 
8/31. We usually monitor the progress (via Stephen) at the weekly Octavia 
meeting. Stephen indicated that we won't get it before the L3 deadline and with 
all the open items it might still be tight. I am wondering if you can advise on 
that.

Thanks,
German

From: Evgeny Fedoruk evge...@radware.commailto:evge...@radware.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, August 25, 2015 at 9:33 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron][lbaas] L7 - Tasks

Hello

I would like to know if there is a plan for L7 extension work for Liberty.
There is an extension patch-set here: https://review.openstack.org/#/c/148232/
We will also need to do CLI work, which I have started; I will commit an
initial patch-set soon. The reference implementation was started by Stephen here:
https://review.openstack.org/#/c/204957/
A tempest tests update should be done as well; I do not know if it was 
discussed at IRC meetings.
Please share your thought about it.


Regards,
Evg




Re: [openstack-dev] [zaqar] First Liberty testing day

2015-08-26 Thread Flavio Percoco

On 26/08/15 19:30 +0200, Flavio Percoco wrote:

Greetings,

I'm happy to announce that the Zaqar team will hold a testing day this
Friday, August 28th. We'll be testing the API and the client library
and the hope is to find as many issues as possible that can be fixed
before our next release.

I'd like to extend the invitation to the entire community. Join us in
#openstack-zaqar... We have cookies.



And I forgot to mention:

The instructions for the testing day (installation, API calls, etc.)
will be here[0]. I'm finishing up that page offline and I promise
it'll be up by the end of tomorrow (famous last words).

[0] https://wiki.openstack.org/wiki/Zaqar/TestingDay/Liberty

Flavio

P.S: I suck at emails unless I'm ranting. Sorry about that!

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [nova] CI for reliable live-migration

2015-08-26 Thread Joe Gordon
On Wed, Aug 26, 2015 at 8:18 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 8/26/2015 3:21 AM, Timofei Durakov wrote:

 Hello,

 Here is the situation: nova has a live-migration feature but doesn't have
 a CI job to cover it with functional tests, only
 gate-tempest-dsvm-multinode-full (non-voting, btw), which covers
 block-migration only.
 The problem here is that live-migration can differ depending on
 how the instance was booted (volume-backed/ephemeral), how the environment is
 configured (is there a shared instance directory (NFS, for example), or is RBD
 used to store the ephemeral disk), or, for example, a user may have none of
 that and will use the --block-migrate flag. To claim that we have reliable
 live-migration in nova, we should check it at least on envs with RBD or
 NFS, as these are more popular than envs without shared storage at all.
 Here are the steps for that:

  1. make  gate-tempest-dsvm-multinode-full voting, as it looks OK for
 block-migration testing purposes;


When we are ready to make multinode voting we should remove the equivalent
single node job.



 If it's been stable for awhile then I'd be OK with making it voting on
 nova changes, I agree it's important to have at least *something* that
 gates on multi-node testing for nova since we seem to break this a few
 times per release.


Last I checked it isn't as stable is single node yet:
http://jogo.github.io/gate/multinode [0].  The data going into graphite is
a bit noisy so this may be a red herring, but at the very least it needs to
be investigated. When I was last looking into this there were at least two
known bugs:

https://bugs.launchpad.net/nova/+bug/1445569
https://bugs.launchpad.net/nova/+bug/1462305


[0]
http://graphite.openstack.org/graph/?from=-36hoursheight=500until=nowwidth=800bgcolor=fffgcolor=00yMax=100yMin=0target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-full.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-full.{SUCCESS,FAILURE})),%275hours%27),%20%27gate-tempest-dsvm-full%27),%27orange%27)target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.{SUCCESS,FAILURE})),%275hours%27),%20%27gate-tempest-dsvm-multinode-full%27),%27brown%27)title=Check%20Failure%20Rates%20(36%20hours)_t=0.48646087432280183
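The graphite target above charts the job failure rate as a percentage (asPercent of FAILURE over SUCCESS+FAILURE), smoothed with a 5-hour moving average. A rough stand-alone sketch of the same computation, with made-up sample data:

```python
# Rough sketch of what the graphite target computes: per-interval failure
# percentage, smoothed with a trailing moving average. Sample data is
# made up for illustration.

def failure_rate(failures, successes):
    """Per-interval failure percentage, like graphite's asPercent()."""
    return [100.0 * f / (f + s) if (f + s) else 0.0
            for f, s in zip(failures, successes)]

def moving_average(series, window):
    """Trailing moving average, like graphite's movingAverage()."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

Noisy per-interval data is exactly why the smoothing matters: a single bad hour can look like a regression until it is averaged against its neighbors.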



  2. contribute to tempest to cover volume-backed instances live-migration;


 jogo has had a patch up for this for awhile:

 https://review.openstack.org/#/c/165233/

 Since it's not full time on openstack anymore I assume some help there in
 picking up the change would be appreciated.


yes please



  3. make another job with rbd for storing ephemerals, it also requires
 changing tempest config;


 We already have a voting ceph job for nova - can we turn that into a
 multi-node testing job and run live migration with shared storage using
 that?


  4. make job with nfs for ephemerals.


 Can't we use a multi-node ceph job (#3) for this?


 These steps should help us to improve current situation with
 live-migration.

 --
 Timofey.





 --

 Thanks,

 Matt Riedemann





Re: [openstack-dev] [nova] CI for reliable live-migration

2015-08-26 Thread Timofei Durakov
Update:

1. The job fails from time to time; I'm collecting statistics to understand
whether these are valid failures or some races, etc.
2. This sounds good:

 jogo has had a patch up for this for awhile:
 https://review.openstack.org/#/c/165233/

3. It requires more research:

 We already have a voting ceph job for nova - can we turn that into a
 multi-node testing job and run live migration with shared storage using
 that?

4. I think not: there is a branch in the execution flow that is only
exercised when we have a shared instance path.

 Can't we use a multi-node ceph job (#3) for this?


On Wed, Aug 26, 2015 at 6:18 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 8/26/2015 3:21 AM, Timofei Durakov wrote:

 Hello,

 Here is the situation: nova has a live-migration feature but doesn't have
 a CI job to cover it with functional tests, only
 gate-tempest-dsvm-multinode-full (non-voting, btw), which covers
 block-migration only.
 The problem here is that live-migration can differ depending on
 how the instance was booted (volume-backed/ephemeral), how the environment is
 configured (is there a shared instance directory (NFS, for example), or is RBD
 used to store the ephemeral disk), or, for example, a user may have none of
 that and will use the --block-migrate flag. To claim that we have reliable
 live-migration in nova, we should check it at least on envs with RBD or
 NFS, as these are more popular than envs without shared storage at all.
 Here are the steps for that:

  1. make  gate-tempest-dsvm-multinode-full voting, as it looks OK for
 block-migration testing purposes;


 If it's been stable for awhile then I'd be OK with making it voting on
 nova changes, I agree it's important to have at least *something* that
 gates on multi-node testing for nova since we seem to break this a few
 times per release.

  2. contribute to tempest to cover volume-backed instances live-migration;


 jogo has had a patch up for this for awhile:

 https://review.openstack.org/#/c/165233/

 Since it's not full time on openstack anymore I assume some help there in
 picking up the change would be appreciated.

  3. make another job with rbd for storing ephemerals, it also requires
 changing tempest config;


 We already have a voting ceph job for nova - can we turn that into a
 multi-node testing job and run live migration with shared storage using
 that?

  4. make job with nfs for ephemerals.


 Can't we use a multi-node ceph job (#3) for this?


 These steps should help us to improve current situation with
 live-migration.

 --
 Timofey.





 --

 Thanks,

 Matt Riedemann





Re: [openstack-dev] [requirements] dependencies problem on different release

2015-08-26 Thread Boris Pavlovic
Gareth,


A real example is to enable Rally for OpenStack Juno. Rally doesn't support
 old releases officially, but I could check out its code at the Juno release
 date, which makes both codebases match. However, even if I use the old
 requirements.txt to install dependencies, many packages get installed at their
 latest upstream versions and some of them break. An ugly way is to copy the
 pip list from an old Juno environment and install those versions explicitly.
 I hope there is a better way to do this. Anyone have smart ideas?


Install everything in virtualenv (or at least Rally)

Best regards,
Boris Pavlovic

On Wed, Aug 26, 2015 at 7:00 AM, Gareth academicgar...@gmail.com wrote:

 Hey guys,

 I have a question about dependencies. There is an example:

 On 2014.1, project A is released with its dependencies in requirements.txt,
 which contains:

 foo>=1.5.0
 bar>=2.0.0,<2.2.0

 and half a year later, the requirements.txt changes to:

 foo>=1.7.0
 bar>=2.1.0,<2.2.0

 It looks fine, but a potential change would be the upstream versions of
 packages foo and bar becoming 2.0.0 and 3.0.0 (a major version upgrade means
 there are incompatible changes).

 For bar, there will be no problems, because <2.2.0 caps the version
 before the major version change. But with foo 2.0.0, it will break the
 installation of 2014.1 A, because current development can't predict every
 incompatible change in the future.
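The effect can be reproduced with a toy version matcher. This is an illustration only, not pip's real (PEP 440) specifier logic:

```python
# Toy illustration of why an uncapped "foo>=1.5.0" breaks when foo 2.0.0
# lands upstream, while "bar>=2.0.0,<2.2.0" stays safe. Not pip's real
# PEP 440 implementation.

def _v(s):
    """Parse "1.5.0" into a comparable tuple (1, 5, 0)."""
    return tuple(int(p) for p in s.split("."))

def satisfies(version, spec):
    """spec is a comma-separated list of >=X and <X clauses."""
    for clause in spec.split(","):
        if clause.startswith(">="):
            if _v(version) < _v(clause[2:]):
                return False
        elif clause.startswith("<"):
            if _v(version) >= _v(clause[1:]):
                return False
    return True
```

With `>=1.5.0` alone, the incompatible foo 2.0.0 satisfies the spec and gets installed; bar's `<2.2.0` cap keeps 3.0.0 out. Adding an upper cap to foo at release time would have prevented the break, at the cost of shutting out future compatible versions too.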

 A real example is to enable Rally for OpenStack Juno. Rally doesn't
 support old releases officially, but I could check out its code at the Juno
 release date, which makes both codebases match. However, even if I use the old
 requirements.txt to install dependencies, many packages get installed at their
 latest upstream versions and some of them break. An ugly way is to copy the
 pip list from an old Juno environment and install those versions explicitly.
 I hope there is a better way to do this. Anyone have smart ideas?

 --
 Gareth

 *Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
 *OpenStack contributor, kun_huang@freenode*
 *My promise: if you find any spelling or grammar mistakes in my email from
 Mar 1 2013, notify me *
 *and I'll donate $1 or ¥1 to an open organization you specify.*





Re: [openstack-dev] [glance][murano] Glance V3 (Artifacts) usage in Murano

2015-08-26 Thread Nikhil Komawar
Can't find the logs on eavesdrop atm. Discussed yesterday on
#openstack-relmgr-office around UTC evening.

On 8/26/15 12:32 PM, Flavio Percoco wrote:
 On 26/08/15 12:22 -0400, Nikhil Komawar wrote:
 We considered that option and have finally agreed on what Alex suggests
 at the rel-mgrs office as this is the least painful path for most.

 It would have been nice to know it had already been discussed among
 some glance members and other folks. I'm sure you all considered other
 options too so I'm good.

 It'd be nice to have a link to the discussion too.

 Thanks,
 Flavio



 On 8/26/15 12:04 PM, Flavio Percoco wrote:
 On 24/08/15 18:37 +0300, Alexander Tivelkov wrote:
 Hi folks,

 In the upcoming L release Murano is going to use the Glance Artifact
 Repository feature implemented as part of EXPERIMENTAL Glance V3 API.

 The server-side support of this feature is already merged in glance's
 master
 branch, while the client code is not: it was agreed that the v3's
 client will
 stay in a dedicated feature branch (feature/artifacts) and will not
 be released
 until the API is stable and final (a major API refactoring based on
 the
 feedback from API WG is on the way and is likely to happen in M). So,
 there
 will be no v3-aware releases of python-glanceclient on pypi until
 then. The
 early adopters of V3 API are encouraged to build the tarballs out
 of the
 feature branch on their own and use them, keeping in mind that the
 API is
 EXPERIMENTAL so everything may be (and will be) changed.

 However, Murano needs some way to consume Glance V3 right now, and  -
 as it has
 a voting requirements job at the gate - it cannot just put a git
 branch
 reference in its requirements.txt. It needs some kind of a release
 which would
 be part of global requirements etc.

 So, it was decided to temporarily copy-paste the experimental part of
 glance
 client into the murano client (i.e. to copy
 python-glanceclient/feature/
 artifacts/v3 to python-muranoclient/master/artifacts) and release it
 as part of
 several next releases of python-muranoclient.

 When the Glance V3 API is stable, we'll put its client to the master
 branch of
 python-glanceclient and release it normally, then dropping the
 temporary copy
 from python-muranoclient.

 Until then, all the changes to the experimental client should be done
 in the
 feature/artifacts branch of glance client and copied (synced) to
 murano client.
 Similar to oslo.incubator sync procedure, just without a shell
 script :)

 I hope that the need to do this copy-pasting will not last for long
 and the v3
 API becomes stable soon enough.

 I was going to suggest having a separate library just for this code.
 You'd work on this library until the API is stable and then you'd
 merge it back into glanceclient as soon as the feature is stable
 server side.

 We could also have a way to load the library code as a plugin for
 glanceclient - similar to the way openstack client works - but that
 requires a spec for mitaka.

 Anyway, the library could be called `python-glanceclient-artifacts` or
 something along those lines.

 Flavio




 -- 

 Thanks,
 Nikhil






-- 

Thanks,
Nikhil




[openstack-dev] [zaqar] First Liberty testing day

2015-08-26 Thread Flavio Percoco

Greetings,

I'm happy to announce that the Zaqar team will hold a testing day this
Friday, August 28th. We'll be testing the API and the client library
and the hope is to find as many issues as possible that can be fixed
before our next release.

I'd like to extend the invitation to the entire community. Join us in
#openstack-zaqar... We have cookies.

Flavio

--
@flaper87
Flavio Percoco




[openstack-dev] [heat][horizon] Backward-incompatible changes to the Neutron API

2015-08-26 Thread James Dempsey
Greetings Heat/Horizon Devs,

There is some talk about possibly backward-incompatible changes to the
Neutron VPNaaS API and I'd like to better understand what that means for
Heat and Horizon.

It has been proposed to change Neutron VPNService objects such that they
reference a new resource type called an Endpoint Group instead of
simply a Subnet.

Does this mean that any version of Heat/Horizon would only be able to
support either the old or new Neutron API, or is there some way to allow
a version of Heat/Horizon to support both?
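One generic pattern for supporting both forms in a client such as Heat or Horizon is to branch on what the target cloud accepts. A hypothetical sketch follows; the field names merely follow the description above, and none of this is actual Heat, Horizon, or Neutron code:

```python
# Hypothetical sketch: build a VPN service request body using the new
# endpoint-group reference when the cloud supports it, falling back to
# the old subnet-based form otherwise. Field names are assumptions based
# on the mail's description, not the real Neutron VPNaaS API.

def build_vpnservice_body(router_id, subnet_id=None, endpoint_group_id=None,
                          supports_endpoint_groups=False):
    body = {"vpnservice": {"router_id": router_id}}
    if supports_endpoint_groups and endpoint_group_id:
        body["vpnservice"]["endpoint_group_id"] = endpoint_group_id  # new form
    else:
        body["vpnservice"]["subnet_id"] = subnet_id                  # old form
    return body
```

Whether this is feasible in practice depends on whether Neutron exposes a way to discover which form it supports (e.g. via extension listing); if the change is truly backward-incompatible with no discovery mechanism, each Heat/Horizon version could only target one API form.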


Thanks,
James

-- 
James Dempsey
Senior Cloud Engineer
Catalyst IT Limited
+64 4 803 2264
--



Re: [openstack-dev] [oslo] Help with stable/juno branches / releases

2015-08-26 Thread Tony Breeds
On Wed, Aug 26, 2015 at 03:11:56PM -0400, Doug Hellmann wrote:
 Tony,
 
 Thanks for digging into this!

No problem.  It seemed like such a simple thing :/

 I should be able to help, but right now we're ramping up for the L3
 feature freeze and there are a lot of release-related activities going
 on. Can this wait a few weeks for things to settle down again?

Hi Doug,
Of course I'd rather not wait but I understand that I've uncovered a bit of
a mess that is stable/juno :(

Right now I need 3 releases for oslo packages and then releases for at least 5
other projects from stable/juno (and that after I get the various reviews
closed out) and it's quite possible that these releases will in turn generate
more.

I have to admit I'm questioning whether it's worth it.  Not because I think it's too
hard, but it is substantial effort to put into juno which is (in theory) going
to be EOL'd in 6 - 10 weeks.

I feel bad for asking that question as I've pulled in favors and people have
agreed to $things that they're not entirely comfortable with so we can fix
this.

Is it worth discussing this at next week's cross-project meeting?

Yours Tony.




Re: [openstack-dev] [neutron][networking-ovn][tempest] devstack: gate-tempest-dsvm-networking-ovn failures in Openstack CI

2015-08-26 Thread Russell Bryant
On 08/26/2015 04:25 PM, Amitabha Biswas wrote:
 With the recent commits it seems that the
 gate-tempest-dsvm-networking-ovn is succeeding more or less every time.
 The DBDeadlock issues still are seen on q-svc logs but are not frequent
 enough to cause ovsdb failures that were leading to the dsvm-networking
 failing before.

\o/

 Once in a while a test fails for e.g.
 tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_two_subnets
 that failed recently in Jenkins. But I am pretty sure it will succeed if
 the suite is re-run.

Have you looked to see if the same test is failing for the regular
neutron jobs?  Or does it seem to be OVN specific?

 Should the gate-tempest-dsvm-networking-ovn become voting at this time,
 and re-run/re-check if it fails in the Jenkins check?

That's a good question.  Maybe we should put up a test patch and recheck
it a bunch over the next few days to make sure it's as good as we think.
 If so, I'm all for making it voting asap.  Would you like to create a
test change for this?

If anything new comes up later that causes us problems, the nice thing
is that we can pretty quickly and easily disable tests by updating our
devstackgaterc file in the networking-ovn repo.

-- 
Russell Bryant



Re: [openstack-dev] [ironic] Re: New API for node create, specifying initial provision state

2015-08-26 Thread Julia Kreger
My apologies for not expressing my thoughts on this matter
sooner, however I've had to spend some time collecting my
thoughts.

To me, it seems like we do not trust our users.  Granted,
when I say users, I mean administrators who likely know more
about the disposition and capabilities of their fleet than
could ever be discovered or inferred via software.

Sure, we have other users, mainly in the form of consumers,
asking Ironic for hardware to be deployed, but the driver for
adoption is who feels the least amount of pain.

API versioning aside, I have to ask the community, what is
more important?

- An inflexible workflow that forces an administrator to
always have a green field, and to step through a workflow
that we've dictated, which may not apply to their operational
scenario, ultimately driving them to write custom code to
inject new nodes into the database directly, which will
surely break from time to time, causing them to hate Ironic
and look for a different solution.

- A happy administrator that has the capabilities to do their
job (and thus manage the baremetal node wherever it is in the
operator's lifecycle) in an efficient fashion, thus causing
them to fall in love with Ironic.

To me, it seems like happy administrators are the most
important thing for us to focus on, and while the workflow
nature is extremely important for greenfield deployments,
the ability to override the workflow seems absolutely vital
to an existing deployment, even if it is via a trust_me
super secret advanced handshake of doom that tells the API
that the user knows best.

As a consumer of Ironic, an administrator of sorts, I don't
care about API versions as much as it has been argued.
I care about being able to achieve a task to meet my goals in
an efficient and repeatable fashion. I want it to be easier
for an administrator to do their job.

-Julia


On Tue, Aug 18, 2015 at 8:05 PM, Ruby Loo rlooya...@gmail.com wrote:




 On 17 August 2015 at 20:20, Robert Collins robe...@robertcollins.net
 wrote:

 On 11 August 2015 at 06:13, Ruby Loo rlooya...@gmail.com wrote:
  Hi, sorry for the delay. I vote no. I understand the rationale of
 trying to
  do things so that we don't break our users but that's what the
 versioning is
  meant for and more importantly -- I think adding the ENROLL state is
 fairly
  important wrt the lifecycle of a node. I don't particularly want to
 hide
  that and/or let folks opt out of it in the long term.
 
  From a reviewer point-of-view, my concern is me trying to remember
 all the
  possible permutations/states etc that are possible to make sure that
 new
  code doesn't break existing behavior. I haven't thought out whether
 adding
  this new API would make that worse or not, but then, I don't really
 want to
  have to think about it. So KISS as much as we can! :)

 I'm a little surprised by this, to be honest.

 Here's why: allowing the initial state to be chosen from
 ENROLL/AVAILABLE from the latest version of the API is precisely as
 complex as allowing two versions of the API {old, new} where old
 creates nodes in  AVAILABLE and new creates nodes in ENROLL. The only
 difference I can see is that eventually someday if {old} stops being
 supported, then and only then we can go through the code and clean
 things up.

 It seems to me that the costs to us of supporting graceful transitions
 for users here are:

 1) A new version NEWVER of the API that supports node state being one
 of {not supplied, AVAILABLE, ENROLL}, on creation, defaulting to
 AVAILABLE when not supplied.
 2) Supporting the initial state of AVAILABLE indefinitely rather than
 just until we *delete* version 1.10.
 3) CD deployments that had rolled forward to 1.11 will need to add the
 state parameter to their scripts to move forward to NEWVER.
 4) Don't default the client to the versions between 1.10 and NEWVER
 versions at any point.

 That seems like a very small price to pay on our side, and the
 benefits for users are that they can opt into the new functionality
 when they are ready.
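 The defaulting behaviour Rob describes could be sketched roughly as follows. This is an illustration only: the constant names and the helper function are assumptions for the sake of the example, not Ironic's actual code.

```python
# Sketch of the node-create defaulting described above: in the newer API
# version the caller may supply an initial provision state; when it is
# omitted, the server falls back to AVAILABLE for backward compatibility.
# Names and the allowed set are illustrative assumptions, not Ironic code.
AVAILABLE = 'available'
ENROLL = 'enroll'
ALLOWED_INITIAL_STATES = {AVAILABLE, ENROLL}


def initial_provision_state(requested=None):
    """Return the provision state a new node should start in.

    `requested` is the state from the create request body, or None when
    the client did not supply one.
    """
    if requested is None:
        return AVAILABLE  # pre-NEWVER behaviour, kept indefinitely
    if requested not in ALLOWED_INITIAL_STATES:
        raise ValueError('unsupported initial state: %s' % requested)
    return requested
```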

 -Rob


 After thinking about this some more, I'm not actually going to address
 Rob's points above. What I want to do is go back and discuss... what do
 people think about having an API that allows the initial provision state to
 be specified, for a node that is created in Ironic. I'm assuming that
 enroll state exists :)

 Earlier today on IRC, Devananda mentioned that there's a very strong case
 for allowing a node to be created in any of the stable states (enroll,
 manageable, available, active). Maybe he'll elaborate later on this. I
 know that there's a use case where there is a desire to import nodes (with
 instances on them) from another system into ironic, and have them be active
 right away. (They don't want the nodes to go from
 enroll -> verifying -> manageable -> cleaning!!! -> available!!! -> active).

 1. What would the default provision state be, if it wasn't specified?
 A. 'available' to be backwards compatible with pre-v1.11
 or
 

[openstack-dev] subunit2html location on images changing

2015-08-26 Thread Matthew Treinish
Hi Everyone,

There is a pending change up for review that will move the location of the
subunit2html jenkins slave script:

https://review.openstack.org/212864/

It switches from a locally installed copy to using the version packaged in
os-testr which is installed in a system venv. This was done in an effort to
unify and package up some of the tooling which was locally copied around under
the covers but are generally useful utilities. If you have a local gate hook
which is manually calling the script, realize that when this change merges and
takes effect on the next round of nodepool images, things will start failing. To
fix it either update the location in your hook, or an even better solution would
be to just rely on devstack-gate to do the right thing for you. For example,
calling out to:

http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/functions.sh#n571

will do the right thing.
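Until a hook is updated, it can also locate the script at run time instead of hard-coding one path. The sketch below is an illustration only: the candidate paths are guesses at old and new image layouts, not the actual locations, and a real hook should simply rely on the devstack-gate function linked above.

```python
import os
import shutil


def find_subunit2html(candidates=None):
    """Return the first usable subunit2html location, or None.

    The default candidate paths are illustrative guesses at the packaged
    (os-testr venv) and legacy (locally copied) layouts; a real gate hook
    should defer to devstack-gate instead of hard-coding paths.
    """
    if candidates is None:
        candidates = [
            '/usr/os-testr-env/bin/subunit2html',                # assumed venv path
            '/usr/local/jenkins/slave_scripts/subunit2html.py',  # old local copy
        ]
    for path in candidates:
        if os.path.exists(path):
            return path
    # Last resort: whatever is on $PATH, if anything.
    return shutil.which('subunit2html')
```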

Thanks,

Matthew Treinish




Re: [openstack-dev] [sahara] Bug triage/fix and doc update days in Liberty

2015-08-26 Thread michael mccune

hey Saharans,

with Doc fix day fast approaching i wanted to send out an email with the 
etherpad again,


https://etherpad.openstack.org/p/sahara-liberty-doc-update-day

it has been updated with all the docs and we are ready to roll starting 
on monday aug 31.


mike

On 08/24/2015 09:41 AM, Ethan Gafford wrote:

Hello all,

Bugfix days for Liberty's Sahara release have begun! Please sign up for bug 
fixes at:
https://etherpad.openstack.org/p/sahara-liberty-bug-fix-day

Cheers,
Ethan Gafford
Senior Software Engineer
OpenStack Sahara
Red Hat, Inc.

- Original Message -
From: Sergey Lukjanov slukja...@mirantis.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Thursday, August 13, 2015 11:12:33 AM
Subject: [openstack-dev] [sahara] Bug triage/fix and doc update days in Liberty

Hi folks,

at today's IRC meeting [0] we've agreed to have:

1) Bug triage day on Aug 17
We're looking for the volunteer to coordinate it ;) If someone wants to do it, 
please, reply to this email.
http://etherpad.openstack.org/p/sahara-liberty-bug-triage-day


2) Bug fix day on Aug 24
Ethan (egafford) volunteered to coordinate it.
http://etherpad.openstack.org/p/sahara-liberty-bug-fix-day


3) Doc update day on Aug 31
Mike (elmiko) volunteered to coordinate it.
http://etherpad.openstack.org/p/sahara-liberty-doc-update-day


Coordinators, please add some initial notes to the etherpads and ensure that 
folks will be using them to sync efforts. For communication let's use 
#openstack-sahara IRC channel as always.

Thanks.

[0] 
http://eavesdrop.openstack.org/meetings/sahara/2015/sahara.2015-08-13-14.00.html






Re: [openstack-dev] [wsme] potential issues with WSME 0.8.0

2015-08-26 Thread Dmitry Tantsur

On 08/25/2015 07:38 PM, Chris Dent wrote:


WSME version 0.8.0 was released today with several fixes to error
handling and error messages. These fixes make WSME behave more in
the way it says it would like to behave (and should behave) with
regard to input validation and HTTP handling. You want these
changes.

Unfortunately we've discovered since the release that it causes test
failures in Ceilometer, Aodh and Ironic so it may also cause some
issues in other services.

The two main issues are:

* More detailed input validation can result in the body of a 4xx
   response having changed to reflect increased detail of the
   problem. If you have tests which check this response body, they
   may now break.

* Formerly, input validation would allow unused fields to pass through
   and be dropped. This is now, as a virtue of more strict processing
   throughout the validation handling, considered a client-side error.
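The second change can be illustrated generically. The snippet below is a plain-Python model of lenient versus strict input validation, not WSME's actual implementation; the names are invented for the example.

```python
# Model of the behaviour change described above: a lenient validator
# silently drops unknown fields, while a strict one treats them as a
# client-side (4xx) error. This is an illustration, not WSME code.
class ClientError(Exception):
    """Stands in for a 4xx response."""


def validate(body, known_fields, strict):
    unknown = set(body) - set(known_fields)
    if unknown and strict:
        raise ClientError('unknown fields: %s' % ', '.join(sorted(unknown)))
    # Lenient mode: keep only the fields we know about.
    return {k: v for k, v in body.items() if k in known_fields}
```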


Note that this is an API breaking change, which can potentially break 
random users of all projects using wsme. I think we should communicate 
this point a bit louder, and I also believe it should have warranted a 
major version bump.




There may also be situations where a 500 had been returned in the
past but now a more correct status code in the 4xx range is
returned.

Fixes for ceilometer and ironic are pending and may provide some
guidance on fixes other projects might need to do:

* Ironic: https://review.openstack.org/216802
* Ceilometer: https://review.openstack.org/#/c/208467/






[openstack-dev] [neutron][SR-IOV] deprecate agent_required option for SR-IOV mechanism driver

2015-08-26 Thread Moshe Levi
Hi all,

When SR-IOV was introduced in Juno, the SR-IOV agent supported only link state changes.
Some Intel cards don't support setting link state, so to
work around that the SR-IOV mechanism driver supports both agent and agentless modes.
From Liberty the SR-IOV agent brings more functionality, like
QoS and port security, so we want to make the agent mandatory.

I have already talked to pczesno from Intel and got his approval in the pci 
meeting [1]

For this to happen commit [2] and [3] need to be approved.



If anyone objects to this change now is the time to speak.





[1] - 
http://eavesdrop.openstack.org/meetings/pci_passthrough/2015/pci_passthrough.2015-06-23-13.09.log.txt

[2] - sriov: update port state even if ip link fails - 
https://review.openstack.org/#/c/195060/

[3] - SR-IOV: deprecate agent_required option - 
https://review.openstack.org/#/c/214324/



Thanks,

Moshe Levi


Re: [openstack-dev] [wsme] potential issues with WSME 0.8.0

2015-08-26 Thread Renat Akhmerov
Chris,

What would be your recommendation for now? Just to cap wsme version and hold on 
with changes adjusting to WSME 0.8.0? Or you think most likely these changes in 
new WSME will remain on?

Thanks

Renat Akhmerov
@ Mirantis Inc.


 On 26 Aug 2015, at 14:27, Chris Dent chd...@redhat.com wrote:
 
 On Wed, 26 Aug 2015, Dmitry Tantsur wrote:
 
 Note that this is an API breaking change, which can potentially break random 
 users of all projects using wsme. I think we should communicate this point a 
 bit louder, and I also believe it should have warranted a major version bump.
 
 Yeah, Lucas and I weren't actually aware of the specific impact of that
 change until after it was released; part of the danger of being cores-on-
 demand rather than cores-by-desire[1].
 
 I'll speak with him and dhellman later this morning and figure out
 the best thing to do.
 
 And post the outcome back here.
 
 [1] After having done quite a few reviews and fixing a few bugs in WSME
 for a few months my advice to any one using it is to make a plan to
 stop. It is a very difficult design that is extremely hard to
 maintain. On top of that it was written from the standpoint of
 satisfying some use cases (mostly related to typed input and output
 handling and validation) by doing the HTTP handling that got the
 desired results _not_ correct HTTP. It is riddled with incorrect
 handling of headers and response codes that are very hard to fix
 without causing unplanned side effects.
 
 -- 
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent
 




[openstack-dev] [magnum] versioned objects changes

2015-08-26 Thread Grasza, Grzegorz
Hi,

I noticed that right now, when we make changes (adding/removing fields) in 
https://github.com/openstack/magnum/tree/master/magnum/objects , we don't 
change object versions.

The idea of objects is that each change in their fields should be versioned, 
documentation about the change should also be written in a comment inside the 
object and the obj_make_compatible method should be implemented or updated. See 
an example here:
https://github.com/openstack/nova/commit/ad6051bb5c2b62a0de6708cd2d7ac1e3cfd8f1d3#diff-7c6fefb09f0e1b446141d4c8f1ac5458L27

The question is: do you think magnum should support rolling upgrades from the next 
release, or is it still too early?

If yes, I think core reviewers should start checking for these incompatible 
changes.

To clarify, rolling upgrades means support for running magnum services at 
different versions at the same time.
In Nova, there is an RPC call in the conductor to backport objects, which is 
called when older code gets an object it doesn't understand. This patch does 
this in Magnum: https://review.openstack.org/#/c/184791/ .
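The pattern Greg describes could look roughly like the sketch below. The object name, fields, and version numbers here are invented for illustration (and the real mechanism lives in oslo.versionedobjects); see the linked Nova commit for an actual instance.

```python
# Rough model of a versioned object: version 1.1 added a field, and
# obj_make_compatible strips it when an older service asks for the 1.0
# wire format. Names and versions are illustrative assumptions only.
class Bay(object):
    # Version 1.0: initial version
    # Version 1.1: added 'node_count'
    VERSION = '1.1'

    def __init__(self, name, node_count=0):
        self.fields = {'name': name, 'node_count': node_count}

    def obj_make_compatible(self, primitive, target_version):
        """Downgrade a primitive (dict) in place to an older version."""
        if target_version == '1.0':
            primitive.pop('node_count', None)

    def obj_to_primitive(self, target_version=None):
        primitive = dict(self.fields)
        if target_version:
            self.obj_make_compatible(primitive, target_version)
        return primitive
```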

I can report bugs and propose patches with version changes for this release, to 
get the effort started.

In Mitaka, when Grenade gets multi-node support, it can be used to add CI tests 
for rolling upgrades in Magnum.


/ Greg



Re: [openstack-dev] [wsme] potential issues with WSME 0.8.0

2015-08-26 Thread Chris Dent

On Wed, 26 Aug 2015, Dmitry Tantsur wrote:

Note that this is an API breaking change, which can potentially break random 
users of all projects using wsme. I think we should communicate this point a 
bit louder, and I also believe it should have warranted a major version bump.


Yeah, Lucas and I weren't actually aware of the specific impact of that
change until after it was released; part of the danger of being cores-on-
demand rather than cores-by-desire[1].

I'll speak with him and dhellman later this morning and figure out
the best thing to do.

And post the outcome back here.

[1] After having done quite a few reviews and fixing a few bugs in WSME
for a few months my advice to any one using it is to make a plan to
stop. It is a very difficult design that is extremely hard to
maintain. On top of that it was written from the standpoint of
satisfying some use cases (mostly related to typed input and output
handling and validation) by doing the HTTP handling that got the
desired results _not_ correct HTTP. It is riddled with incorrect
handling of headers and response codes that are very hard to fix
without causing unplanned side effects.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



[openstack-dev] [api][keystone][openstackclient] Standards for object name attributes and filtering

2015-08-26 Thread Henry Nash
Hi

With keystone, we recently came across an issue in terms of the assumptions 
that the openstack client is making about the entities it can show - namely 
that it assumes all entries have a ‘name’ attribute (which is how the 
openstack show command works). It turns out that not all keystone entities have 
such an attribute (e.g. IDPs for federation) - often the ID is really the name. 
Is there already agreement across our APIs that all first class entities should 
have a ‘name’ attribute?  If we do, then we need to change keystone, if not, 
then we need to change openstack client to not make this assumption (and 
perhaps allow some kind of per-entity definition of which attribute should be 
used for ‘show’).
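The per-entity-definition idea could be sketched like this. It is a hypothetical client-side helper, not anything the openstack client actually provides; the entity-type keys are invented for the example.

```python
# Hypothetical helper for a CLI 'show' command: prefer a per-entity
# display attribute, fall back to 'name', then to 'id'. The mapping
# entries are illustrative assumptions, not osc behaviour.
DISPLAY_ATTRIBUTES = {
    'identity_provider': 'id',  # IDPs have no 'name'; the ID is the name
}


def display_value(entity_type, entity):
    attr = DISPLAY_ATTRIBUTES.get(entity_type, 'name')
    if attr in entity:
        return entity[attr]
    return entity.get('id')  # last-resort fallback
```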

A follow on (and somewhat related) question to this is whether we have agreed 
standards for what should happen if someone provides an unrecognized filter to a 
list entities API request at the http level (this is related since this is also 
the hole osc fell into with keystone since, again, ‘name’ is not a recognized 
filter attribute). Currently keystone ignores filters it doesn’t understand (so 
if that was your only filter, you would get back all the entities). The 
alternative approach would of course be to return no entities if the filter is 
on an attribute we don’t recognize (or even issue a validation or bad request 
exception).  Again, the question is whether we have agreement across the 
projects for how such unrecognized filtering should be handled?

Thanks

Henry
Keystone Core



Re: [openstack-dev] [fuel] Branching strategy vs feature freeze

2015-08-26 Thread Thierry Carrez
Dmitry Borodaenko wrote:
 TL;DR was actually hidden in the middle of the email, here's an even
 shorter version:
 
 0) we're suffering from closing master for feature work for too long
 
 1) continuously rebased future branch is most likely a no-go
 
 2) short FF (SCF and stable branch after 2 weeks) is an option for 8.0
 
 3) short FF with stable in a separate internal gerrit was also proposed
 
 4) merits and cost of enabling CI setup for private forks should be  
 carefully considered independently from other options

Should come as no surprise that I encourage you to follow (2),
especially as we work to further align Fuel with OpenStack ways so that
it can be added as an official project team.

Note that the two weeks is not really a hard requirement (could be
more, or less). In this model you need to come up with a release
candidate, which is where we create the release branch, which becomes a
stable branch at the end of the cycle. It usually takes 2 to 4 weeks for
OpenStack projects to get to that stage, but you could get there in 2
days or 5 weeks and it would still work (the key is to publish at least
one release candidate before the end of the cycle).

It's a balance between the pain of backporting fixes and the pain of
freezing master. At one point the flow of fixes slows down enough and/or
the pressure to unfreeze master becomes too strong: that's when you
should create the release branch.

Hope this helps,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [ANN] OpenStack Kilo on Ubuntu fully automated with Ansible! Ready for NFV L2 Bridges via Heat!

2015-08-26 Thread Thierry Carrez
Martinx - ジェームズ wrote:
  I'm proud to announce an Ansible Playbook to deploy OpenStack on Ubuntu!
  Check it out!
  * https://github.com/sandvine/os-ansible-deployment-lite

How does it compare with the playbooks developed as an OpenStack project
by the OpenStackAnsible team[1] ?

Any benefit, difference ? Anything you could contribute back there ? Any
value in merging the two efforts ?

[1] http://governance.openstack.org/reference/projects/openstackansible.html

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [wsme] potential issues with WSME 0.8.0

2015-08-26 Thread Renat Akhmerov

 On 26 Aug 2015, at 13:40, Dmitry Tantsur dtant...@redhat.com wrote:
 
 On 08/25/2015 07:38 PM, Chris Dent wrote:
 
 WSME version 0.8.0 was released today with several fixes to error
 handling and error messages. These fixes make WSME behave more in
 the way it says it would like to behave (and should behave) with
 regard to input validation and HTTP handling. You want these
 changes.
 
 Unfortunately we've discovered since the release that it causes test
 failures in Ceilometer, Aodh and Ironic so it may also cause some
 issues in other services.
 
 The two main issues are:
 
 * More detailed input validation can result in the body of a 4xx
   response having changed to reflect increased detail of the
   problem. If you have tests which check this response body, they
   may now break.
 
 * Formerly, input validation would allow unused fields to pass through
   and be dropped. This is now, as a virtue of more strict processing
   throughout the validation handling, considered a client-side error.
 
 Note that this is an API breaking change, which can potentially break random 
 users of all projects using wsme. I think we should communicate this point a 
 bit louder, and I also believe it should have warranted a major version bump.


Agree with that. But thanks for letting us know. We’ll have to deal with that.


Renat Akhmerov
@ Mirantis Inc.



[openstack-dev] [magnum-ui] Status of Magnum UI

2015-08-26 Thread Jay Lau
Hi,

I see that we already have a magnum ui team, just wondering what is the
current status of this project? I'm planning a PoC and want to see if there
are some current magnum ui work that I can leverage.

-- 
Thanks,

Jay Lau (Guangya Liu)


Re: [openstack-dev] [wsme] potential issues with WSME 0.8.0

2015-08-26 Thread Lucas Alvares Gomes
Hi,


 +1, yeah I kinda agree with the major version bump. But also it's
 important to note that Ironic, which was affected by that, was relying
 on being able to POST nonexistent fields to create resources, and WSME
 would just ignore those on versions < 0.8.0. That's a legitimate bug
 that has been fixed in WSME (and projects shouldn't have relied on
 that in the first place).


Bah, sorry lemme correct myself here: s/agree/not agree.



[openstack-dev] [ceilometer] now using a grenade plugin

2015-08-26 Thread Chris Dent


Since it provides an opportunity to do some interesting things I
thought I should announce that ceilometer has left the core of
grenade and is now running its own 'gate-grenade-dsvm-ceilometer'
job that uses a grenade plugin hosted in the ceilometer repo.

At the moment the plugin does the bare minimum of checks to confirm that
the upgrade was successful (in devstack/upgrade/upgrade.sh).

Now that the plugin is in the repo, we have a lot of flexibility to do
whatever we want, and we should.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [oslo] incubator move to private modules

2015-08-26 Thread Flavio Percoco

On 25/08/15 06:01 -0400, Davanum Srinivas wrote:

Morgan,

Bit more radical :) I am inclined to just yank all code from oslo-incubator and
let the projects modify/move what they have left into their own package/module
structure (and change the contracts however they see fit).


Glad this conversation is happening, I've started to think about this
as well. I think we're at a point where we could just let projects move
from where they are.

However, I'd like this to be a bit more organized. For instance, if we
dismiss oslo-incubator and let projects move forward on their own,
it'd be better to have all the `openstack/common/` packages renamed so
that it'll create less confusion to newcomers. At the very least, as
Morgan mentioned, these packages could be prefixed with an `_` and
become 'private' and 'owned' by the project.

We still need a 'deprecation' process for the code in the
oslo-incubator repository and we would still have to accept fixes for
previous releases.

Thoughts?
Flavio



-- Dims

On Tue, Aug 25, 2015 at 1:48 AM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

   Over time oslo incubator has become less important as most things are
   simply becoming libraries from the get-go. However, there is still code in
   incubator and particularly Keystone client has seen an issue where the
   incubator code is considered a public api by consuming projects.

   I would like to start the conversation of moving all incubator modules to
   be prefixed by _ indicating they are not meant for public consumption. I
   expect that if there is not a large uproar here on the mailing list, that I
   will propose a spec to oslo shortly to make this change possible.

   What I am looking for before the spec happens, is the view from the
   community on making this type of change and bringing modules private (and
   associated concerns).

   Cheers,
   --Morgan

   Sent via mobile




--
Davanum Srinivas :: https://twitter.com/dims






--
@flaper87
Flavio Percoco




Re: [openstack-dev] [wsme] potential issues with WSME 0.8.0

2015-08-26 Thread Lingxian Kong
Hi, Chris,

Thanks for bringing this up and letting us know. The same issues also affected
Mistral :-(

On Wed, Aug 26, 2015 at 1:38 AM, Chris Dent chd...@redhat.com wrote:


 WSME version 0.8.0 was released today with several fixes to error
 handling and error messages. These fixes make WSME behave more in
 the way it says it would like to behave (and should behave) with
 regard to input validation and HTTP handling. You want these
 changes.

 Unfortunately we've discovered since the release that it causes test
 failures in Ceilometer, Aodh and Ironic so it may also cause some
 issues in other services.

 The two main issues are:

 * More detailed input validation can result in the body of a 4xx
   response having changed to reflect increased detail of the
   problem. If you have tests which check this response body, they
   may now break.

 * Formerly, input validation would allow unused fields to pass through
   and be dropped. This is now, as a virtue of more strict processing
   throughout the validation handling, considered a client-side error.

 There may also be situations where a 500 had been returned in the
 past but now a more correct status code in the 4xx range is
 returned.
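[Editorial aside: the behavior change above can be sketched in plain Python. This is an illustrative toy, not WSME's actual code; the field names are made up.]

```python
# Illustrative sketch only: mimics the WSME 0.8.0 behavior change.
# Older WSME silently dropped unknown input fields; 0.8.0 treats
# them as a client-side (4xx) error.
ALLOWED_FIELDS = {'name', 'description'}

def validate_lenient(body):
    # Pre-0.8.0 behavior: unknown fields pass through and are dropped.
    return {k: v for k, v in body.items() if k in ALLOWED_FIELDS}

def validate_strict(body):
    # 0.8.0 behavior: unknown fields are a client error (HTTP 400).
    unknown = set(body) - ALLOWED_FIELDS
    if unknown:
        raise ValueError("Unknown attribute(s): %s" % ', '.join(sorted(unknown)))
    return dict(body)

payload = {'name': 'sample', 'bogus': True}
print(validate_lenient(payload))   # the 'bogus' field is silently dropped
try:
    validate_strict(payload)
except ValueError as exc:
    print("400 Bad Request:", exc)
```

Tests that asserted on the old, lenient outcome are exactly the ones now breaking.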

 Fixes for ceilometer and ironic are pending and may provide some
 guidance on fixes other projects might need to do:

 * Ironic: https://review.openstack.org/216802
 * Ceilometer: https://review.openstack.org/#/c/208467/

 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent





-- 
*Regards!*
*---*
*Lingxian Kong*


Re: [openstack-dev] [ceilometer] jenkis job failures

2015-08-26 Thread Chris Dent

On Wed, 26 Aug 2015, Ryota Mibu wrote:


Quick note to ceilometer folks.

Many check and gate jobs for ceilometer have failed due to a WSME-related
issue that is already addressed by [1], so please make sure your patch
sets are rebased on the current master before executing 'recheck'.


Thanks for posting about this.

I've gone through this morning and rebased some of the pending
patches that just needed a rebase to get moving.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [neutron][vpnaas] Need community guidance please...

2015-08-26 Thread Germy Lure
Hi,

Maybe I missed some key points. But why did we introduce vpn-endpoint groups
here?

ipsec-site-connection for IPSec VPN only, gre-connection for GRE VPN
only, and mpls-connection for MPLS VPN only. You see, different
connections for different vpn types. Indeed, we can't reuse the connection API.

Piece of the ref document(https://review.openstack.org/#/c/191944/) like
this:
allowing subnets (local) and CIDRs (peer) to be used for IPSec, but
routers, networks, and VLANs to be used for other VPN types (BGP, L2,
direct connection)

You see, different epg types for different vpn types. We can't reuse epg.

So, how do we meet "the third goal, to do this in a manner that the code
can be reused for other flavors of VPN"?

Thanks.


On Tue, Aug 25, 2015 at 1:54 AM, Madhusudhan Kandadai 
madhusudhan.openst...@gmail.com wrote:

 My two cents..

 On Mon, Aug 24, 2015 at 8:48 AM, Jay Pipes jaypi...@gmail.com wrote:

 Hi Paul, comments inline...

 On 08/24/2015 07:02 AM, Paul Michali wrote:

 Hi,

 I'm working on the multiple local subnet feature for VPN (RFE
 https://bugs.launchpad.net/neutron/+bug/1459423), with a developer
 reference document detailing the proposed process
 (https://review.openstack.org/#/c/191944/). The plan is to do this in
 two steps. The first is to add new APIs and database support for
 endpoint groups (see dev ref for details). The second is to modify the
 IPSec/VPN APIs to make use of the new information (and no longer use
 some older, but equivalent info that is being extended).

 I have a few process/procedural questions for the community...

 Q1) Should I do this all as one huge commit, as two commits (one for
 endpoint groups and one for modification to support multiple local
 subnets), or multiple (chained) commits (e.g. commit for each API for
 the endpoint groups and for each part of the multiple subnet change)?

 My thought (now) is to do this as two commits, with the endpoint groups
 as one, and multiple subnet groups as a second. I started with a commit
 for create API of endpoint (212692), and then did a chained commit for
 delete/show/list (215717), thinking they could be reviewed in pieces,
 but they are not that large and could be easily merged.


 My advice would be 2 commits, as you have split them out.


 I would prefer to have two commits with end-point groups as one and
 modification to support multiple local subnets as another. This will be
 easy to troubleshoot when in need.


 Q2) If the two parts are done separately, should the endpoint group
 portion, which adds a table and API calls, be done as part of the
 existing version (v2) of VPN, instead of introducing a new version at
 that step?


 Is the Neutron VPN API microversioned? If not, then I suppose your only
 option is to modify the existing v2 API. These seem to be additive changes,
 not modifications to existing API calls, in which case they are
 backwards-compatible (just not discoverable via an API microversion).

 I suggest this be done as part of the existing v2 API. As the API
 tests are in transition from the neutron to the neutron-vpnaas repo, we can modify
 the tests and submit them as one patch


 Q3) For the new API additions, do I create a new subclass for the
 interface that includes all the existing APIs, introduce a new class
 that is used together with the existing class, or do I add this to the
 existing API?


 Until microversioning is introduced to the Neutron VPN API, it should
 probably be a change to the existing v2 API.

 +1


 Q4) With the final multiple local subnet changes, there will be changes
 to the VPN service API (delete subnet_id arg) and IPSec connection API
 (delete peer_cidrs arg, and add local_endpoints and peer_endpoints
 args). Do we modify the URI so that it calls out v3 (versus v2)? Where
 do we do that?


 Hmm, with the backwards-incompatible API changes like the above, your
 only option is to increment the major version number. The alternative would
 be to add support for microversioning as a prerequisite to the patch that
 adds backwards-incompatible changes, and then use a microversion to
 introduce those changes.

 Right now, we are beefing up scenario tests for VPN, adding
 microversioning feature seems better option for me, but open to have
 reviews from community.


 Best,
 -jay

 I'm unsure of the mechanism of increasing the version.

 Thanks in advance for any guidance here on how this should be rolled
 out...

 Regards,

 Paul Michali (pc_m)






Re: [openstack-dev] [wsme] potential issues with WSME 0.8.0

2015-08-26 Thread Lucas Alvares Gomes
On Wed, Aug 26, 2015 at 9:27 AM, Chris Dent chd...@redhat.com wrote:
 On Wed, 26 Aug 2015, Dmitry Tantsur wrote:

 Note that this is an API breaking change, which can potentially break
 random users of all projects using wsme. I think we should communicate this
 point a bit louder, and I also believe it should have warranted a major
 version bump.


 Yeah, Lucas and I weren't actually aware of the specific impact of that
 change until after it was released; part of the danger of being cores-on-
 demand rather than cores-by-desire[1].

 I'll speak with him and dhellman later this morning and figure out
 the best thing to do.


+1, yeah I kinda agree with the major version bump. But also it's
important to note that Ironic, which was affected by that, was relying
on being able to POST nonexistent fields to create resources, and WSME
would just ignore those on versions < 0.8.0. That's a legitimate bug
that has been fixed in WSME (and projects shouldn't have relied on
that in the first place).

Cheers,
Lucas



Re: [openstack-dev] [wsme] potential issues with WSME 0.8.0

2015-08-26 Thread Renat Akhmerov
Ok, thanks. We'll fix it in Mistral accordingly.

Renat Akhmerov
@ Mirantis Inc.



 On 26 Aug 2015, at 15:12, Dmitry Tantsur dtant...@redhat.com wrote:
 
 On 08/26/2015 11:05 AM, Lucas Alvares Gomes wrote:
 On Wed, Aug 26, 2015 at 9:27 AM, Chris Dent chd...@redhat.com wrote:
 On Wed, 26 Aug 2015, Dmitry Tantsur wrote:
 
 Note that this is an API breaking change, which can potentially break
 random users of all projects using wsme. I think we should communicate this
 point a bit louder, and I also believe it should have warranted a major
 version bump.
 
 
 Yeah, Lucas and I weren't actually aware of the specific impact of that
 change until after it was released; part of the danger of being cores-on-
 demand rather than cores-by-desire[1].
 
 I'll speak with him and dhellman later this morning and figure out
 the best thing to do.
 
 
 +1, yeah I kinda agree with the major version bump. But also it's
 important to note that Ironic, which was affected by that, was relying
 on being able to POST nonexistent fields to create resources, and WSME
 would just ignore those on versions < 0.8.0. That's a legitimate bug
 that has been fixed in WSME (and projects shouldn't have relied on
 that in the first place).
 
 Note that I'm not talking about our projects, I'm talking about our users, 
 whose automation might become broken after the switch.
 
 
 Cheers,
 Lucas
 
 
 
 




Re: [openstack-dev] [oslo] incubator move to private modules

2015-08-26 Thread Thierry Carrez
Flavio Percoco wrote:
 On 25/08/15 06:01 -0400, Davanum Srinivas wrote:
 Morgan,

 Bit more radical :) I am inclined to just yank all code from
 oslo-incubator and
 let the projects modify/move what they have left into their own
 package/module
 structure (and change the contracts however they see fit).
 
 Glad this conversation is happening, I've started to think about this
 as well. I think we're at a point where we could just let projects move
 from where they are.
 
 However, I'd like this to be a bit more organized. For instance, if we
 dismiss oslo-incubator and let projects move forward on their own,
 it'd be better to have all the `openstack/common/` packages renamed so
 that it'll create less confusion to newcomers. At the very least, as
 Morgan mentioned, these packages could be prefixed with an `_` and
 become 'private' and 'owned' by the project.
 
 We still need a 'deprecation' process for the code in the
 oslo-incubator repository and we would still have to accept fixes for
 previous releases.

The Death of the Incubator. Sounds like a great thing to discuss at the
Design Summit. I don't think we would kill it before start of Mitaka
anyway ?

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [oslo] incubator move to private modules

2015-08-26 Thread Davanum Srinivas
yep ttx. this will be for Mitaka.

-- Dims

On Wed, Aug 26, 2015 at 6:18 AM, Thierry Carrez thie...@openstack.org
wrote:

 Flavio Percoco wrote:
  On 25/08/15 06:01 -0400, Davanum Srinivas wrote:
  Morgan,
 
  Bit more radical :) I am inclined to just yank all code from
  oslo-incubator and
  let the projects modify/move what they have left into their own
  package/module
  structure (and change the contracts however they see fit).
 
  Glad this conversation is happening, I've started to think about this
  as well. I think we're at a point where we could just let projects move
  from where they are.
 
  However, I'd like this to be a bit more organized. For instance, if we
  dismiss oslo-incubator and let projects move forward on their own,
  it'd be better to have all the `openstack/common/` packages renamed so
  that it'll create less confusion to newcomers. At the very least, as
  Morgan mentioned, these packages could be prefixed with an `_` and
  become 'private' and 'owned' by the project.
 
  We still need a 'deprecation' process for the code in the
  oslo-incubator repository and we would still have to accept fixes for
  previous releases.

 The Death of the Incubator. Sounds like a great thing to discuss at the
 Design Summit. I don't think we would kill it before start of Mitaka
 anyway ?

 --
 Thierry Carrez (ttx)






-- 
Davanum Srinivas :: https://twitter.com/dims


Re: [openstack-dev] [fuel] Branching strategy vs feature freeze

2015-08-26 Thread Igor Marnat
Thierry, Dmitry,
key point is that we in Fuel need to follow as much community-adopted
process as we can, and not introduce something Fuel-specific. We
need not only to avoid forking code, but also to avoid forking
processes and approaches for Fuel.

Both #2 and #3 allow that, making it easier for contributors to
participate in Fuel.

#3 (having internal repos for pre-release preparation, bug fixing and
small custom features) from community perspective is the same as #2,
provided that all the code from these internal repos always ends up
being committed upstream. Purely internal logistics.

One additional note from my side is that we might want to
consider an approach similar to the one adopted for the Puppet
modules. AFAIK, they are typically released several weeks later
than the OpenStack code. This is natural for Fuel, as it depends on
these modules and on the packaging of OpenStack.

Regards,
Igor Marnat


On Wed, Aug 26, 2015 at 1:04 PM, Thierry Carrez thie...@openstack.org wrote:
 Dmitry Borodaenko wrote:
 TL;DR was actually hidden in the middle of the email, here's an even
 shorter version:

 0) we're suffering from closing master for feature work for too long

 1) continuously rebased future branch is most likely a no-go

 2) short FF (SCF and stable branch after 2 weeks) is an option for 8.0

 3) short FF with stable in a separate internal gerrit was also proposed

 4) merits and cost of enabling CI setup for private forks should be
 carefully considered independently from other options

 Should come as no surprise that I encourage you to follow (2),
 especially as we work to further align Fuel with OpenStack ways so that
 it can be added as an official project team.

 Note that the two weeks is not really a hard requirement (could be
 more, or less). In this model you need to come up with a release
 candidate, which is where we create the release branch, which becomes a
 stable branch at the end of the cycle. It usually takes 2 to 4 weeks for
 OpenStack projects to get to that stage, but you could get there in 2
 days or 5 weeks and it would still work (the key is to publish at least
 one release candidate before the end of the cycle).

 It's a balance between the pain of backporting fixes and the pain of
 freezing master. At one point the flow of fixes slows down enough and/or
 the pressure to unfreeze master becomes too strong: that's when you
 should create the release branch.

 Hope this helps,

 --
 Thierry Carrez (ttx)




Re: [openstack-dev] [neutron][vpnaas] Need community guidance please...

2015-08-26 Thread Paul Michali
See @PCM inline...


On Wed, Aug 26, 2015 at 4:44 AM Germy Lure germy.l...@gmail.com wrote:

 Hi,

 Maybe I missed some key points. But why we introduced vpn-endpoint groups
 here?


@PCM For the multiple local subnet capabilities for IPSec, the existing API
would need to be changed, so that we can specify 1+ local subnets, as part
of the VPN connection. This would have been the simplest approach for
updating IPSec connections, however, it makes it a point solution.

Other VPN flavors would also have similar endpoint specifications in
their connection APIs.

The approach that I'm advocating, is to extract out some of the commonality
between VPN flavors such that we can have some reuse.

Essentially, the idea is to break VPN into two parts. One is what is
connected and the other is how the connection is made.

For the former, the idea of an endpoint group is introduced. It provides a
collection of endpoints that can be identified by a type (e.g. subnet,
CIDR, network, vlan, ...) and an ID.

The latter would be VPN flavor specific, having all of the details needed
for that type of VPN connection and would reference the needed endpoint
group(s) by ID.

This separates the what from the how.
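As a rough illustration of that split (identifiers and fields here are my own
assumptions, not the actual proposed Neutron API):

```python
# Rough sketch of the proposed "what"/"how" split for VPN.
import uuid

def make_endpoint_group(endpoint_type, endpoints):
    # The "what": a typed collection of endpoints, e.g. type
    # 'subnet', 'cidr', 'network', or 'vlan', identified by an ID.
    return {'id': str(uuid.uuid4()),
            'type': endpoint_type,
            'endpoints': list(endpoints)}

local_group = make_endpoint_group('subnet', ['subnet-id-1', 'subnet-id-2'])
peer_group = make_endpoint_group('cidr', ['10.1.0.0/24'])

# The "how": a flavor-specific connection that references the
# endpoint groups only by ID.
ipsec_conn = {'name': 'site-to-site',
              'local_endpoints': [local_group['id']],
              'peer_endpoints': [peer_group['id']]}
```

Another VPN flavor would create groups of a different type (network, VLAN,
...) but reuse the same group API.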



 ipsec-site-connection for IPSec VPN only, gre-connection for GRE VPN
 only, and mpls-connection for MPLS VPN only. You see, different
 connections for different vpn types. Indeed, We can't reuse connection API.


@PCM Correct. The how is VPN type specific. But we can have a common API
for the what.




 Piece of the ref document(https://review.openstack.org/#/c/191944/) like
 this:
 allowing subnets (local) and CIDRs (peer) to be used for IPSec, but
 routers, networks, and VLANs to be used for other VPN types (BGP, L2,
 direct connection)

 You see, different epg types for different vpn types. We can't reuse epg.


@PCM We're not reusing the endpoint group itself for different types of
VPN, we're reusing the API for different types of VPN. A common API that
holds a collection of endpoints of a specific type.

You can look at the code out for review, for a feel for the implementation
being worked on:  https://review.openstack.org/#/c/212692/3



 So, how do we meet "the third goal, to do this in a manner that the code
 can be reused for other flavors of VPN"?


@PCM The code for the endpoint group API could be used for other VPN types.
Instead of them creating table fields (and the corresponding db logic) for
the endpoints of their connection, they can refer to the ID(s) from the
endpoint groups table, and can add additional validation based on the VPN
type.

FYI, I pushed up version 7 of the dev ref document yesterday.

Regards,

PCM


 Thanks.


 On Tue, Aug 25, 2015 at 1:54 AM, Madhusudhan Kandadai 
 madhusudhan.openst...@gmail.com wrote:

 My two cents..

 On Mon, Aug 24, 2015 at 8:48 AM, Jay Pipes jaypi...@gmail.com wrote:

 Hi Paul, comments inline...

 On 08/24/2015 07:02 AM, Paul Michali wrote:

 Hi,

 I'm working on the multiple local subnet feature for VPN (RFE
 https://bugs.launchpad.net/neutron/+bug/1459423), with a developer
 reference document detailing the proposed process
 (https://review.openstack.org/#/c/191944/). The plan is to do this in
 two steps. The first is to add new APIs and database support for
 endpoint groups (see dev ref for details). The second is to modify the
 IPSec/VPN APIs to make use of the new information (and no longer use
 some older, but equivalent info that is being extended).

 I have a few process/procedural questions for the community...

 Q1) Should I do this all as one huge commit, as two commits (one for
 endpoint groups and one for modification to support multiple local
 subnets), or multiple (chained) commits (e.g. commit for each API for
 the endpoint groups and for each part of the multiple subnet change)?

 My thought (now) is to do this as two commits, with the endpoint groups
 as one, and multiple subnet groups as a second. I started with a commit
 for create API of endpoint (212692), and then did a chained commit for
 delete/show/list (215717), thinking they could be reviewed in pieces,
 but they are not that large and could be easily merged.


 My advice would be 2 commits, as you have split them out.


 I would prefer to have two commits with end-point groups as one and
 modification to support multiple local subnets as another. This will be
 easy to troubleshoot when in need.


 Q2) If the two parts are done separately, should the endpoint group
 portion, which adds a table and API calls, be done as part of the
 existing version (v2) of VPN, instead of introducing a new version at
 that step?


 Is the Neutron VPN API microversioned? If not, then I suppose your only
 option is to modify the existing v2 API. These seem to be additive changes,
 not modifications to existing API calls, in which case they are
 backwards-compatible (just not discoverable via an API microversion).

 I suggest to be done as part of the existing version v2 API . As the api
 tests are in 

Re: [openstack-dev] [Openstack-operators] PLEASE READ: VPNaaS API Change - not backward compatible

2015-08-26 Thread Paul Michali
James,

Great stuff! Please see @PCM in-line...



On Tue, Aug 25, 2015 at 6:26 PM James Dempsey jam...@catalyst.net.nz
wrote:

 Oops, sorry about the blank email.  Answers/Questions in-line.

 On 26/08/15 07:46, Paul Michali wrote:
  Previous post only went to dev list. Ensuring both and adding a bit
 more...
 
 
 
  On Tue, Aug 25, 2015 at 8:37 AM Paul Michali p...@michali.net wrote:
 
  Xav,
 
  The discussion is very important, and hence why both Kyle and I have
 been
  posting these questions on the operator (and dev) lists. Unfortunately,
 I
  wasn't subscribed to the operator's list and missed some responses to
  Kyle's message, which were posted only to that list.
 
  As a result, I had an incomplete picture and posted this thread to see
 if
  it was OK to do this without backward compatibility, based on the
  (incorrect) assumption that there was no production use. That is
 corrected
  now, and I'm getting all the messages and thanks to everyone, have
 input on
  messages I missed.
 
  So given that, let's try a reset on the discussion, so that I can better
  understand the issues...
 

 Great!  Thanks very much for expanding the scope.  We really appreciate it.

  Do you feel that not having backward compatibility (but having a
 migration
  path) would seriously affect you or would it be manageable?

 Currently, this feels like it would seriously affect us.  I don't feel
 confident that the following concerns won't cause us big problems.


 As Xav mentioned previously, we have a few major concerns:


@PCM Thanks! It's good to know what things we need to be aware of.




 1) Horizon compatibility

 We run a newer version of horizon than we do neutron.  If Horizon
 version X doesn't work with Neutron version X-1, this is a very big
 problem for us.


@PCM Interesting. I always thought that Horizon updates lagged Neutron
changes, and this wouldn't be a concern.




 2) Service interruption

 How much of a service interruption would the 'migration path' cause?


@PCM The expectation of the proposal is that the migration would occur as
part of the normal OpenStack upgrade process (new services installed,
current services stopped, database migration occurs, new services are
started).

It would have the same impact as what would happen today, if you update
from one release to another. I'm sure you folks have a much better handle
on that impact and how to handle it (maintenance windows, scheduled
updates, etc).


 We all know that IPsec VPNs can be fragile...  How much of a guarantee will
 we have that migration doesn't break a bunch of VPNs all at the same
 time because of some slight difference in the way configurations are
 generated?


@PCM I see the risk as extremely low. With the migration, the end result is
really just moving/copying fields from one table to another. The underlying
configuration done to *Swan would be the same.

For example, the subnet ID, which is specified in the VPN service API and
stored in the vpnservices table, would be stored in a new vpn_endpoints
table, and the ipsec_site_connections table would reference that entry
(rather than looking up the subnet in the vpnservices table).
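To illustrate the scale of that data move (table and column names below are
assumptions for illustration, not the actual schema), it amounts to roughly:

```python
# Toy illustration of the migration described above, using dicts in
# place of database rows.
vpnservices = [{'id': 'svc-1', 'subnet_id': 'subnet-a'}]
ipsec_site_connections = [{'id': 'conn-1', 'vpnservice_id': 'svc-1'}]

vpn_endpoints = []
for svc in vpnservices:
    # Move the subnet reference out of vpnservices into vpn_endpoints...
    endpoint = {'id': 'ep-%s' % svc['id'],
                'type': 'subnet',
                'value': svc.pop('subnet_id')}
    vpn_endpoints.append(endpoint)
    # ...and point the IPsec connections at the new endpoint entry.
    for conn in ipsec_site_connections:
        if conn['vpnservice_id'] == svc['id']:
            conn['local_endpoint_id'] = endpoint['id']
```

The configuration pushed down to *Swan would be built from the same values
either way, which is why the risk to established tunnels should be low.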



 3) Heat compatibility

 We don't always run the same version of Heat and Neutron.


@PCM I must admit, I've never used Heat, and am woefully ignorant about it.
Can you elaborate on the Heat concerns, as they may be related to VPN API differences?

Is Heat being used to setup VPN connections, as part of orchestration?




 
  Is there pain for the customers beyond learning about the new API
 changes
  and capabilities (something that would apply whether there is backward
  compatibility or not)?
 

 See points 1,2, and 3 above.


 
  Another implication of not having backwards compatibility would be that
  end-users would need to immediately switch to using the new API, once the
  migration occurs, versus doing so on their own time frame.  Would this
 be a
  concern for you (customers not having the convenience of delaying their
  switch to the new API)?
 
 
  I was thinking that backward incompatible changes would adversely affect
  people who were using client scripts/apps to configure (a large number
 of)
  IPsec connections, where they'd have to have client scripts/apps
 in-place
  to support the new API.
 

 This is actually less of a concern.  We have found that VPN creation is
 mostly done manually and anyone who is clever enough to make IPsec go is
 clever enough to learn a new API/horizon interface.


@PCM Do you see much reliance on tooling to setup VPN (such that having to
update the tooling would be a concern for end-users), or is this something
that could be managed through process/preparation?




 
  Which is more of a logistics issue, and could be managed, IMHO.
 
 
 
 
  Would there be customers that would fall into that category, or are
  customers manually configuring IPSec connections in that they could just
  use the new API?
 

 Most customers could easily adapt to a new API.

 
  

[openstack-dev] [DevStack] Proposing a revert for $OVS_DATAPATH_TYPE

2015-08-26 Thread Sean M. Collins
Hi,

This was a bit of a mistake on my part. I recently put together some
hardware in my home lab to try to rebuild a lab similar to the one I
had, where I developed the support for provider networking, which
was documented in the following guide:

http://docs.openstack.org/developer/devstack/guides/neutron.html#neutron-networking-with-open-vswitch-and-provider-networks

With the new hardware I have, I tried to configure a DevStack install,
and followed my original guide. However, the patch that introduced
$OVS_DATAPATH_TYPE currently causes the Neutron agent to fail to start,
since unless we define $OVS_DATAPATH_TYPE, $OVS_PHYSICAL_BRIDGE is never
created.

I should have caught this during my original review of the code, but I
wasn't careful enough.

I am proposing a revert, due to the lack of documentation about
$OVS_DATAPATH_TYPE.

https://review.openstack.org/217202

-- 
Sean M. Collins



Re: [openstack-dev] mock 1.3 breaking all of Kilo in Sid (and other cases of this kind)

2015-08-26 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 08/26/2015 03:34 PM, Thomas Goirand wrote:
 On 08/25/2015 08:59 PM, Matt Riedemann wrote:
 
 
 On 8/25/2015 10:04 AM, Thomas Goirand wrote:
 On 08/25/2015 03:42 PM, Thomas Goirand wrote:
 Hi, [...] Anyway, the result is that mock 1.3 broke 9
 packages at least in Kilo, currently in Sid [1]. Maybe, as
 packages gets rebuilt, I'll get more bug reports. This
 really, is a depressing situation. [...]
 
 Some ppl on IRC explained to me what the situation was, which
 is that the mock API has been wrongly used, and some tests were
 in fact wrongly passing, so indeed, this is one of the rare
 cases where breaking the API probably made sense.
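[Editorial aside: one well-known class of the misuse described above is a
misspelled assertion method. Old mock returned a child Mock for any attribute,
so the "assertion" silently passed; mock 1.1+/1.3, like unittest.mock in
Python 3.5+, raises AttributeError for attributes starting with "assert".
A minimal reproduction:]

```python
# Demonstrates a typo'd assert method on a Mock. With old mock the
# call below returned a child Mock and silently "passed" as a test
# assertion; modern mock raises AttributeError instead.
from unittest import mock

m = mock.Mock()
m.do_thing(1)

try:
    m.assert_called_once_wiht(1)   # note the typo: "wiht"
except AttributeError as exc:
    print("caught:", exc)
```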
 
 As it doesn't bring anything to repair these tests, I'm just
 not running them in Kilo from now on, using something like
 this:
 
  --subunit 'tests\.unit\.(?!.*foo.*)'
 
 Please comment if you think that's the wrong way to go. Also,
 has some of these been repaired in the stable/kilo branch?
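[Editorial aside: the filter above relies on a regex negative lookahead. Its
selection behavior can be checked with plain Python; "foo" is a placeholder
module name, as in the example:]

```python
import re

# Select test ids under tests.unit. whose remainder does NOT
# contain "foo" (the negative lookahead from the filter above).
pattern = re.compile(r'tests\.unit\.(?!.*foo.*)')

test_ids = [
    'tests.unit.test_bar.TestBar.test_ok',
    'tests.unit.foo.TestFoo.test_broken',
]
selected = [t for t in test_ids if pattern.search(t)]
print(selected)   # only the non-foo test id remains
```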
 
 I seem to remember some projects backporting the test fixes to 
 stable/kilo but ultimately we just capped mock on the stable
 branches to avoid this issue there.
 
 I really wish that nothing of this kind was even possible. Adding
 such an upper cap is like hiding the dust under the carpet: it
 doesn't remove the issue, it just hides it. We really have too much
 of these in OpenStack. Fixing broken tests was the correct thing to
 do, IMO.
 

I think whoever is interested in raising a cap is free to send patches
and get it up. I don't think those patches would be blocked.

Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJV3coRAAoJEC5aWaUY1u57FBsH/05W80HXQRLlGARiN4K5SA8T
kC8dVlKx7OPcg/XY77GMHn/oacPErXcPQFreWW1EHwFpIFePNroE1mrwZjIkgy5L
ehsn/I7B3lhKLq3yqlE+MdyoeCcgXBW/Hi4DzMGEu+Os59dYc+LrO5vAjEieoU50
SsqfsHBoJo4SjtgoJdp0Q/dlaVlXuetCF5I/DWvhvJVrYuJBHIFjORTjkc6RZOOU
Ke+bBRjbxJFYcTDWlE8AHzssfIDCnYlDv9+pFv+JO+tCqxIhiOraVxq+sD60fJww
pExbjkZikhrRaqzzdLnYm0/ZDNzPS/UO+JSEZPFwu/pUGc7ztB/7+1PFf2oyftI=
=TEyG
-END PGP SIGNATURE-



[openstack-dev] [nova] python-novaclient release 2.27.0 planned for 9/1

2015-08-26 Thread Matt Riedemann
The last python-novaclient release (2.26.0) was released on 6/3.  Soft 
dependency freeze is Thursday 9/3.  So we plan on doing a 
python-novaclient 2.27.0 release on Tuesday 9/1.


As of today, this is the list of changes since 2.26.0:

http://paste.openstack.org/show/428347/

Andrey requested that we review [1] before cutting the release.

If there are any other high priority changes that you think need to get 
into that release, please bring them up in IRC, preferably by the end of 
this week.


[1] 
https://review.openstack.org/#/q/status:open+project:openstack/python-novaclient+branch:master+topic:list,n,z


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'

2015-08-26 Thread Morgan Fainberg
This seems quite reasonable. +1

Sent via mobile

 On Aug 25, 2015, at 13:30, Rich Megginson rmegg...@redhat.com wrote:
 
 This concerns the support of the names of domain scoped Keystone resources 
 (users, projects, etc.) in puppet.
 
 At the puppet-openstack meeting today [1] we decided that puppet-openstack 
 will support Keystone domain scoped resource names without a '::domain' in 
 the name, only if the 'default_domain_id' parameter in Keystone has _not_ 
 been set.  That is, if the default domain is 'Default'.  In addition:
 
 * In the OpenStack L release, if 'default_domain_id' is set, puppet will 
 issue a warning if a name is used without '::domain'.
 * In the OpenStack M release, puppet will issue a warning if a name is used 
 without '::domain', even if 'default_domain_id' is not set.
 * In N (or possibly, O), resource names will be required to have '::domain'.
 
 The current spec [2] and current code [3] try to support names without a 
 '::domain' in the name, in non-default domains, provided the name is unique 
 across _all_ domains.  This will have to be changed in the current code and 
 spec.
 
 
 [1] 
 http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-08-25-15.01.html
 [2] 
 http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html
 [3] 
 https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L217
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] Proposing a revert for $OVS_DATAPATH_TYPE

2015-08-26 Thread Sean M. Collins
Ah, I did not have enough coffee, my control node wasn't synced and
didn't pick up PHYSICAL_NETWORK - that'll do it. Now I look like a
moron.

Sorry for the panic I may have caused
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] [infra] How to auto-generate stable release notes

2015-08-26 Thread Doug Hellmann
Excerpts from Robert Collins's message of 2015-08-19 11:04:37 +1200:
 On 18 August 2015 at 01:46, Thierry Carrez thie...@openstack.org wrote:
 
 Following on from the IRC discussion about release notes.
 
  ttx * we want consumers of the stable branch (from random commit
 and from tagged versions) to get a valid release notes document
  ttx * Some things in that document (like OSSA numbers) we only know
 after the commit is pushed so we need a way to retroactively fix those
 
 So 'random commit' here is a point in time - I think it's reasonable to
 assert that if a commit isn't tagged, its release notes are not final.
 
 So before I dive into detail, here's the basic design I think we can use.
 
 1) release notes are derived from the tree/git history. We might
 eventually use both, for instance.
 2) commits are needed to change release notes.
 3) Notes are mutable in that a clone today vs a clone tomorrow might
 have different release notes about the same change.
 4) Notes are immutable in that for a given git hash/tag the release
 notes will be the same. Tagging a commit will change the version
 description but that is all.
 
 Design assumptions:
 - We want to avoid merge hell when shepherding in a lot of
 release-note-worthy changes, which we expect to happen on stable
 branches always, and at release times on master branches.
 - We want writing a release note to be straightforward
 - We do not want release notes to be custom ordered within a release.
 As long as the ordering is consistent it is ok.
 - We must be able to entirely remove a release note.
 - We must not make things progressively slow down to a crawl over
 years of usage.

Adding a couple of requirements, based on our discussion yesterday:

- Release note authors shouldn't need to know any special values for
  naming their notes files (i.e., no change id or SHA value that has
  special meaning).
- It would be nice if it was somewhat easy to identify the file
  containing a release note on a particular topic.

 
 Proposed data structure:
 - create a top level directory in each repo called release-notes
 - within that create a subdirectory called changes.
 - within the release-notes dir we place yaml files containing the
 release note inputs.
 - within the 'changes' subdirectory, the name of the yaml file will be
 the gerrit change id in a canonical form.
E.g. I1234abcd.yaml
This serves two purposes: it guarantees file name uniqueness (no
 merge conflicts) and lets us
determine which release to group it in (the most recent one, in
 case of merge+revert+merge patterns).

We changed this to using a long enough random number as a prefix, with
a slug value provided by the release note author to help identify what
is in the file.

I think maybe 8 hex digits for the prefix. Or should we go full UUID?

The slug might be something like bug  or removing option foo
which would be converted to a canonical form, removing whitespace in the
filename. The slug can be changed, to allow for fixing typos, but the
prefix needs to remain the same in order for the note to be recognized
as the same item.
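A minimal sketch of that naming scheme (the 8-hex-digit prefix length and the exact slug canonicalization are assumptions here, not a settled spec):

```python
import re
import secrets

def note_filename(slug, prefix=None):
    """Build a release-note filename: <8-hex-prefix>-<canonical-slug>.yaml.

    The random prefix guarantees filename uniqueness (no merge conflicts)
    and stays fixed across edits; the slug is only a human-readable hint
    and may be changed to fix typos.
    """
    if prefix is None:
        prefix = secrets.token_hex(4)  # 8 hex digits
    # Canonicalize the slug: lowercase, whitespace -> '-', drop other punctuation.
    canonical = re.sub(r'[^a-z0-9-]', '', re.sub(r'\s+', '-', slug.lower()))
    return '%s-%s.yaml' % (prefix, canonical)

print(note_filename('removing option foo'))
```

Renaming the slug part while keeping the prefix would let tooling treat the file as the same note.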

 - we also create files that roll up all the notes within history
 versions - named by the version. e.g. release-notes/1.2.3.yaml.

It's not clear why we need this, can you elaborate?

 
 Yaml schema:
 prelude: prelude prose
 notes:
  - note1
  - note2
  - note3

Thierry proposed 3 classifications for notes, which would then be
ordered separately: critical, security, and other. So I propose:

  prelude: 
RST-formatted prelude text
  critical:
- note 1
- note 2
  security:
- note 1
- note 2
  other:
- note 1
- note 2
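A sketch of rendering such a parsed note with the three classifications emitted in a fixed order (the section titles and the plain-dict input are assumptions; a real tool would load the YAML first):

```python
# Fixed ordering for the classified sections proposed above.
SECTION_ORDER = ('critical', 'security', 'other')

def render_note(note):
    """Render a parsed release-note mapping into simple bulleted text,
    prelude first, then each non-empty section in SECTION_ORDER."""
    parts = []
    if note.get('prelude'):
        parts.append(note['prelude'].strip())
    for section in SECTION_ORDER:
        items = note.get(section, [])
        if items:
            parts.append(section.title())
            parts.extend('- %s' % item for item in items)
    return '\n'.join(parts)

note = {
    'prelude': 'This release fixes things.',
    'other': ['minor cleanup'],
    'security': ['hardened default settings'],
}
print(render_note(note))
```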

 
 Processing:
 1) determine the revisions we need to generate release notes for. By
 default generate notes for the current minor release. (E.g. if the
 tree version is 1.2.3.dev4 we would generate release notes for 1.2.0,
 1.2.1, 1.2.2, 1.2.3[which dev4 is leading up to]).

Is there any reason, on a given branch, to not generate the notes for
the entire history of that branch back to the beginning of the project?
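Step 1 of the proposal can be sketched as follows (assuming dev versions look like X.Y.Z.devN; the helper name is illustrative):

```python
def versions_to_document(tree_version):
    """Given a dev version like '1.2.3.dev4', list the releases of the
    current minor series to generate notes for: 1.2.0 .. 1.2.3."""
    base = tree_version.split('.dev')[0]          # '1.2.3'
    major, minor, patch = (int(p) for p in base.split('.'))
    return ['%d.%d.%d' % (major, minor, z) for z in range(patch + 1)]

print(versions_to_document('1.2.3.dev4'))
# ['1.2.0', '1.2.1', '1.2.2', '1.2.3']
```

Extending the range back through earlier minor series would answer the "entire history" variant, at the cost of scanning more commits on every run.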

 2) For each version: scan all the commits to determine gerrit change-ids.
  i) read in all those change-ids' .yaml files and pull out any notes within
 them.
  ii) read in any full version yaml file (and merge in its contained notes)
  iii) Construct a markdown document as follows:
   a) Sort any preludes (there should be only one at most, but let's not
 error if there are multiple)

Rather than sorting, which would change the order of notes as new items
are added, what about listing them in an order based on when they were
added in the history? We can track the first time a note file appears,
so we can maintain the order even if a note is modified.

   b) sort any notes
   c) for each note transform it into a bullet point by prepending its
 first line with '- ' and every subsequent line with '  ' (two spaces
 to form a hanging indent).
   d) strip any trailing \n's from everything.
   e) join 
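Steps c) and d) can be sketched as (a rough interpretation of the proposal, not a final implementation):

```python
def to_bullet(note):
    """Turn a multi-line note into a markdown bullet:
    '- ' before the first line, a two-space hanging indent for the rest,
    with trailing newlines stripped first."""
    lines = note.rstrip('\n').splitlines()
    bullet = ['- ' + lines[0]]
    bullet.extend('  ' + line for line in lines[1:])
    return '\n'.join(bullet)

print(to_bullet('first line\nsecond line\n'))
```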

[openstack-dev] [grenade][cinder] Updates of rootwrap filters

2015-08-26 Thread Dulko, Michal
Hi,

Recently, when working on a simple bug [1], I ran into a need to change
rootwrap filter rules for a few commands. After sending the fix to Gerrit [2],
it turned out that when testing the upgraded cloud, grenade hadn't copied my
updated volume.filters file and therefore failed the check. I wonder how I
should approach the issue:
1. Make the grenade script for Cinder copy the new file to the upgraded cloud.
2. Divide the patch into two parts - first add the new rules, leaving the old
ones in place, then fix the bug and remove the old rules.
3. ?

Any opinions?

[1] https://bugs.launchpad.net/cinder/+bug/1488433
[2] https://review.openstack.org/#/c/216675/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements] dependencies problem on different release

2015-08-26 Thread Gareth
Hey guys,

I have a question about dependencies. There is an example:

On 2014.1, project A is released with its dependency in requirements.txt
which contains:

foo>=1.5.0
bar>=2.0.0,<2.2.0

and half a year later, the requirements.txt changes to:

foo>=1.7.0
bar>=2.1.0,<2.2.0

It looks fine, but a potential change would be the upstream versions of
packages foo and bar becoming 2.0.0 and 3.0.0 (a major version bump means
there are incompatible changes).

For bar there will be no problem, because the <2.2.0 cap protects against
the major version change. But foo 2.0.0 will break the installation of
2014.1 A, because current development can't predict every incompatible
change in the future.
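The difference can be illustrated with a minimal version check (a toy comparison on dotted version tuples, not a full PEP 440 implementation):

```python
def vtuple(v):
    """Parse a dotted version string into a comparable tuple of ints."""
    return tuple(int(p) for p in v.split('.'))

def satisfies(version, lower, upper=None):
    """True if lower <= version (and version < upper, when a cap is given)."""
    ok = vtuple(version) >= vtuple(lower)
    if upper is not None:
        ok = ok and vtuple(version) < vtuple(upper)
    return ok

# foo>=1.5.0 has no cap, so an incompatible foo 2.0.0 still 'satisfies' it:
print(satisfies('2.0.0', '1.5.0'))           # True -> install breaks
# bar>=2.0.0,<2.2.0 is capped, so bar 3.0.0 is rejected:
print(satisfies('3.0.0', '2.0.0', '2.2.0'))  # False -> old release protected
```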

A real example is enabling Rally for OpenStack Juno. Rally doesn't officially
support old releases, but I could check out its code as of the Juno release
date so that both code bases match. However, even if I use the old
requirements.txt to install dependencies, many packages get installed at
their latest upstream versions and some of them break. An ugly workaround is
to copy the pip list from an old Juno environment and install those versions
explicitly. I hope there are better ways to do this work. Anyone have smart
ideas?

-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] CI for reliable live-migration

2015-08-26 Thread Timofei Durakov
Hello,

Here is the situation: nova has a live-migration feature but doesn't have a
CI job to cover it with functional tests, only
gate-tempest-dsvm-multinode-full (non-voting, btw), which covers
block migration only.
The problem is that live migration can behave differently depending on how
the instance was booted (volume-backed/ephemeral) and how the environment is
configured (a shared instance directory - NFS, for example - or RBD used to
store the ephemeral disk), or the user may have no shared storage and use
the --block-migrate flag. To claim that we have reliable live migration in
nova, we should check it at least on environments with RBD or NFS, as these
are more common than environments without any shared storage.
Here are the steps for that:

   1. make gate-tempest-dsvm-multinode-full voting, as it looks OK for
   block-migration testing purposes;
   2. contribute to tempest to cover live migration of volume-backed
   instances;
   3. add another job with RBD for storing ephemerals; it also requires
   changing the tempest config;
   4. add a job with NFS for ephemerals.

These steps should help us improve the current situation with
live migration.

--
Timofey.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] devstack/heat problem with master_wait_condition

2015-08-26 Thread Sergey Kraynev
Hi Stanislaw,

Your host with Fedora should have a special config file, which will send the
signal to the WaitCondition.
For a good example, please take a look at this template:
https://github.com/openstack/heat-templates/blob/819a9a3fc9d6f449129c8cefa5e087569340109b/hot/native_waitcondition.yaml

Also, I suppose the best place for such questions will be
https://ask.openstack.org/en/questions/

Regards,
Sergey.

On 26 August 2015 at 09:23, Pitucha, Stanislaw Izaak 
stanislaw.pitu...@hp.com wrote:

 Hi all,

 I’m trying to stand up magnum according to the quickstart instructions
 with devstack.

 There’s one resource which times out and fails: master_wait_condition. The
 kube master (fedora) host seems to be created - I can log in to it via ssh -
 and other resources are created successfully.



 What can I do from here? How do I debug this? I tried to look for the
 wc_notify script to try it manually, but I can’t even find it.



 Best Regards,

 Stanisław Pitucha



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] potential issues with WSME 0.8.0

2015-08-26 Thread Chris Dent

On Wed, 26 Aug 2015, Renat Akhmerov wrote:


What would be your recommendation for now? Just to cap the wsme version
and hold on with the changes adjusting to WSME 0.8.0? Or do you think it's
most likely these changes in the new WSME will remain?


The fixes in WSME are fixes for genuine bugs. If code using WSME was
relying on that behavior it is probably best that it be fixed.

For example if you have code that is silently accepting invalid
fields in a POST, you probably don't want that.
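The bug class being described can be sketched like this (hypothetical names; this is not WSME's actual API):

```python
def validate_payload(payload, allowed_fields):
    """Reject unknown fields in an incoming POST body instead of silently
    dropping them -- the strict behavior newer WSME enforces."""
    unknown = set(payload) - set(allowed_fields)
    if unknown:
        raise ValueError('unknown fields: %s' % ', '.join(sorted(unknown)))
    return payload

# A lenient server would accept this and ignore 'typo_field'; a strict
# one fails fast, so the client learns about its mistake.
try:
    validate_payload({'name': 'x', 'typo_field': 1}, ['name', 'driver'])
except ValueError as exc:
    print(exc)  # unknown fields: typo_field
```

Code (or clients) that relied on the lenient behavior will start seeing errors, which is the breakage discussed in this thread.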

The other changes are primarily related to the error messages that
are generated when incoming data (usually via a POST) is determined
to be invalid.

If you're unable to change the code calling WSME to reflect these
changes then pinning to 0.8.0 is a potential solution.

In ceilometer and aodh it took 30 minutes to find and resolve the
issues related to the changes.

If you need some help with that you can find me (or lucas) in the
#wsme channel and we can help you figure out what changes might be
needed to fix any failing tests.

Dealing with this is a pain, but I think it is worthwhile: Those
services that are using WSME (for now) need to be able to rely on it
doing what it says it is supposed to be doing, not accidental
behaviors that are present as a result of bugs.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] potential issues with WSME 0.8.0

2015-08-26 Thread Dmitry Tantsur

On 08/26/2015 11:05 AM, Lucas Alvares Gomes wrote:

On Wed, Aug 26, 2015 at 9:27 AM, Chris Dent chd...@redhat.com wrote:

On Wed, 26 Aug 2015, Dmitry Tantsur wrote:


Note that this is an API breaking change, which can potentially break
random users of all projects using wsme. I think we should communicate this
point a bit louder, and I also believe it should have warranted a major
version bump.



Yeah, Lucas and I weren't actually aware of the specific impact of that
change until after it was released; part of the danger of being cores-on-
demand rather than cores-by-desire[1].

I'll speak with him and dhellman later this morning and figure out
the best thing to do.



+1, yeah I kinda agree with the major version bump. But it's also
important to note that Ironic, which was affected by that, was relying
on being able to POST nonexistent fields to create resources, and WSME
would just ignore those on versions < 0.8.0. That's a legitimate bug
that has been fixed in WSME (and projects shouldn't have relied on
that in the first place).


Note that I'm not talking about our projects, I'm talking about our 
users, whose automation might become broken after the switch.




Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] L7 - Tasks

2015-08-26 Thread Samuel Bercovici
Hi,

I think that Evgeny is trying to complete everything besides the reference
implementation (API, CLI, Tempest, etc.).
Evgeny will join the Octavia IRC meeting, so it could be a good opportunity to
get status and sync activities.
As far as I know, 8/31 is feature freeze and not code complete. Please correct
me if I am wrong.

-Sam.



-Original Message-
From: Eichberger, German [mailto:german.eichber...@hp.com] 
Sent: Wednesday, August 26, 2015 2:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] L7 - Tasks

Hi Evgeny,

Of course we would love to have L7 in Liberty but that window is closing on 
8/31. We usually monitor the progress (via Stephen) at the weekly Octavia 
meeting. Stephen indicated that we won't get it before the L3 deadline and with 
all the open items it might still be tight. I am wondering if you can advise on 
that.

Thanks,
German

From: Evgeny Fedoruk evge...@radware.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Tuesday, August 25, 2015 at 9:33 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron][lbaas] L7 - Tasks

Hello

I would like to know if there is a plan for the L7 extension work for Liberty.
There is an extension patch-set here: https://review.openstack.org/#/c/148232/
We will also need to do the CLI work, which I have started and will commit an
initial patch-set for soon. The reference implementation was started by
Stephen here: https://review.openstack.org/#/c/204957/
and the tempest tests update should be done as well. I do not know if it was
discussed at IRC meetings.
Please share your thoughts about it.


Regards,
Evg


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovn][tempest] devstack: gate-tempest-dsvm-networking-ovn failures in Openstack CI

2015-08-26 Thread Russell Bryant
On 08/25/2015 03:02 PM, Assaf Muller wrote:
 
 
 On Tue, Aug 25, 2015 at 2:15 PM, Russell Bryant rbry...@redhat.com wrote:
 
 On 08/25/2015 01:26 PM, Amitabha Biswas wrote:
  Russell suggested removing the MYSQL_DRIVER=MySQL-python declaration
  from local.conf (https://review.openstack.org/#/c/216413/), which results
  in PyMySQL as the default.
 
  With the above change the above DB errors are no longer seen in my local
  setup,
 
 It's great to hear that resolved the errors you saw!
 
   1. Is there any impact of using PyMySQL for the Jenkins check and
 gates.
 
 As Jeremy mentioned, this is what everything else is using (and what OVN
 was automatically already using in OpenStack CI).
 
    2. Why is the gate-networking-ovn-python27 failing (the past
 couple of
  commits) in {0}
  
 networking_ovn.tests.unit.test_ovn_plugin.TestOvnPlugin.test_create_port_security
  [0.194020s] ... FAILED. Do we need another conversation to track 
 this?
 
 This is a separate issue.  The networking-ovn git repo has been pretty
 quiet the last few weeks and it seems something has changed that made
 our tests break.  We inherit a lot of base plugin tests from neutron, so
 it's probably some change in Neutron that we haven't synced with yet.  I
 haven't had time to dig into it yet.
 
 
 This patch was recently merged to Neutron:
 https://review.openstack.org/#/c/201141/
 
 Looks like that unit test is trying to create a port with an invalid MAC
 address.
 I pushed a fix here:
 https://review.openstack.org/#/c/216837/

Thanks, Assaf!  Much appreciated!  :-)

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] mock 1.3 breaking all of Kilo in Sid (and other cases of this kind)

2015-08-26 Thread Thomas Goirand
On 08/25/2015 08:59 PM, Matt Riedemann wrote:
 
 
 On 8/25/2015 10:04 AM, Thomas Goirand wrote:
 On 08/25/2015 03:42 PM, Thomas Goirand wrote:
 Hi,
 [...]
  Anyway, the result is that mock 1.3 broke 9 packages at least in Kilo,
  currently in Sid [1]. Maybe, as packages get rebuilt, I'll get more bug
  reports. This really is a depressing situation.
 [...]

 Some ppl on IRC explained to me what the situation was, which is that
 the mock API has been wrongly used, and some tests were in fact wrongly
 passing, so indeed, this is one of the rare cases where breaking the API
 probably made sense.

 As it doesn't bring anything to repair these tests, I'm just not running
 them in Kilo from now on, using something like this:

 --subunit 'tests\.unit\.(?!.*foo.*)

 Please comment if you think that's the wrong way to go. Also, has some
 of these been repaired in the stable/kilo branch?
 
 I seem to remember some projects backporting the test fixes to
 stable/kilo but ultimately we just capped mock on the stable branches to
 avoid this issue there.

I really wish that nothing of this kind was even possible. Adding such
an upper cap is like hiding the dust under the carpet: it doesn't remove
the issue, it just hides it. We really have too much of these in
OpenStack. Fixing broken tests was the correct thing to do, IMO.

Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] now using a grenade plugin

2015-08-26 Thread Sean Dague
On 08/26/2015 05:14 AM, Chris Dent wrote:
 
 Since it provides an opportunity to do some interesting things I
 thought I should announce that ceilometer has left the core of
 grenade and is now running its own 'gate-grenade-dsvm-ceilometer'
 job that uses a grenade plugin hosted in the ceilometer repo.
 
 At the moment the plugin does the bare minimum of checks to confirm that
 the upgrade was successful (in devstack/upgrade/upgrade.sh).
 
 Now that the plugin is in repo we have a lot of flexibility to do
 whatever we want and we should.
 

Thanks for all the hard work to make this happen.

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ANN] OpenStack Kilo on Ubuntu fully automated with Ansible! Ready for NFV L2 Bridges via Heat!

2015-08-26 Thread Kevin Carter
+1 I'd love to know this too.

Additionally, if vagrant is something that is important to folks in the greater
community, it would be great to get some of those bits upstreamed.

Per the NFV options, I don't see much in the way of OSAD not being able to
support that presently; it's really a matter of introducing the new
configuration options/package additions. However, I may be missing something
based on a cursory look at your published repository. If you're interested, I
would be happy to help you get these capabilities into OSAD so that the greater
community can benefit from some of the work you've done.

--

Kevin Carter
IRC: cloudnull



From: Thierry Carrez thie...@openstack.org
Sent: Wednesday, August 26, 2015 5:15 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [ANN] OpenStack Kilo on Ubuntu fully automated 
with Ansible! Ready for NFV L2 Bridges via Heat!

Martinx - ジェームズ wrote:
  I'm proud to announce an Ansible Playbook to deploy OpenStack on Ubuntu!
  Check it out!
  * https://github.com/sandvine/os-ansible-deployment-lite

How does it compare with the playbooks developed as an OpenStack project
by the OpenStackAnsible team[1] ?

Any benefit, difference ? Anything you could contribute back there ? Any
value in merging the two efforts ?

[1] http://governance.openstack.org/reference/projects/openstackansible.html

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] mock 1.3 breaking all of Kilo in Sid (and other cases of this kind)

2015-08-26 Thread Matt Riedemann



On 8/26/2015 9:15 AM, Ihar Hrachyshka wrote:


On 08/26/2015 03:34 PM, Thomas Goirand wrote:

On 08/25/2015 08:59 PM, Matt Riedemann wrote:



On 8/25/2015 10:04 AM, Thomas Goirand wrote:

On 08/25/2015 03:42 PM, Thomas Goirand wrote:

Hi, [...] Anyway, the result is that mock 1.3 broke 9
packages at least in Kilo, currently in Sid [1]. Maybe, as
packages gets rebuilt, I'll get more bug reports. This
really, is a depressing situation. [...]


Some ppl on IRC explained to me what the situation was, which
is that the mock API has been wrongly used, and some tests were
in fact wrongly passing, so indeed, this is one of the rare
cases where breaking the API probably made sense.

As it doesn't bring anything to repair these tests, I'm just
not running them in Kilo from now on, using something like
this:

--subunit 'tests\.unit\.(?!.*foo.*)

Please comment if you think that's the wrong way to go. Also,
has some of these been repaired in the stable/kilo branch?


I seem to remember some projects backporting the test fixes to
stable/kilo but ultimately we just capped mock on the stable
branches to avoid this issue there.


I really wish that nothing of this kind was even possible. Adding
such an upper cap is like hiding the dust under the carpet: it
doesn't remove the issue, it just hides it. We really have too much
of these in OpenStack. Fixing broken tests was the correct thing to
do, IMO.



I think whoever is interested in raising a cap is free to send patches
and get it up. I don't think those patches would be blocked.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Right, there is a very small team of people who actually care about and
try to maintain stable branches for all projects, and when things are
completely wedged, our first response is to get things unwedged as soon
as possible. This is generally because the longer you wait to fix the
stable branches, the more likely it is that as soon as you fix the first
problem there is a new one, and you're just constantly thrashing on
digging out.


If there were a more community-wide, concerted, all-hands-on-deck effort
around gate breakages and stable branch maintenance, then maybe we would
be less prone to capping dependencies.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] CI for reliable live-migration

2015-08-26 Thread Matt Riedemann



On 8/26/2015 3:21 AM, Timofei Durakov wrote:

Hello,

Here is the situation: nova has live-migration feature but doesn't have
ci job to cover it by functional tests, only
gate-tempest-dsvm-multinode-full(non-voting, btw), which covers
block-migration only.
The problem here is, that live-migration could be different, depending
on how instance was booted(volume-backed/ephemeral), how environment is
configured(is shared instance directory(NFS, for example), or RBD used
to store ephemeral disk), or for example user don't have that and is
going to use --block-migrate flag. To claim that we have reliable
live-migration in nova, we should check it at least on envs with rbd or
nfs as more popular than envs without shared storages at all.
Here is the steps for that:

 1. make  gate-tempest-dsvm-multinode-full voting, as it looks OK for
block-migration testing purposes;


If it's been stable for a while then I'd be OK with making it voting on
nova changes. I agree it's important to have at least *something* that
gates on multi-node testing for nova, since we seem to break this a few
times per release.



 2. contribute to tempest to cover volume-backed instances live-migration;


jogo has had a patch up for this for awhile:

https://review.openstack.org/#/c/165233/

Since jogo isn't full time on openstack anymore, I assume some help there
in picking up the change would be appreciated.



 3. make another job with rbd for storing ephemerals, it also requires
changing tempest config;


We already have a voting ceph job for nova - can we turn that into a 
multi-node testing job and run live migration with shared storage using 
that?



 4. make job with nfs for ephemerals.


Can't we use a multi-node ceph job (#3) for this?



These steps should help us to improve current situation with
live-migration.

--
Timofey.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade][cinder] Updates of rootwrap filters

2015-08-26 Thread Eric Harney
On 08/26/2015 09:57 AM, Dulko, Michal wrote:
 Hi,
 
 Recently when working on a simple bug [1] I've run into a need to change 
 rootwrap filters rules for a few commands. After sending fix to Gerrit [2] it 
 turns out that when testing the upgraded cloud grenade haven't copied my 
 updated volume.filters file, and therefore failed the check. I wonder how 
 should I approach the issue:
 1. Make grenade script for Cinder to copy the new file to upgraded cloud.
 2. Divide the patch into two parts - at first add new rules, leaving the old 
 ones there, then fix the bug and remove old rules.
 3. ?
 
 Any opinions?
 
 [1] https://bugs.launchpad.net/cinder/+bug/1488433
 [2] https://review.openstack.org/#/c/216675/


I believe you have to go with option 1 and add code to grenade to handle
installing the new rootwrap filters.

grenade is detecting an upgrade incompatibility that requires a config
change, which is a good thing.  Splitting it into two patches will still
result in grenade failing, because it will test upgrading kilo to
master, not patch A to patch B.

Example for neutron:
https://review.openstack.org/#/c/143299/

A different example for nova (abandoned for unrelated reasons):
https://review.openstack.org/#/c/151408/



/me goes to investigate whether he can set the system locale to
something strange in the full-lio job, because he really thought we had
fixed all of the locale-related LVM parsing bugs by now.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] mock 1.3 breaking all of Kilo in Sid (and other cases of this kind)

2015-08-26 Thread Thomas Goirand
On 08/25/2015 11:20 PM, Robert Collins wrote:
 This first happened with PBR. Kilo can't use >= 1.x
 
 This is due to Kilo having *inappropriate* version caps on its
 dependencies. Which we've been busy unwinding and fixing
 infrastructure this cycle to avoid having it happen again on Liberty.

Thanks for doing this. I've seen *many* occurrences of what you call
defensive dependencies which I had a hard time dealing with.

 , and Liberty can't
 use <= 1.x.
 
 This is because pbr 1.x offers features that Liberty needs. Thats how
 software moves forward: you add the feature, and someone else uses it
 and declares a dependency on your version.

Sure, no problem with this.
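The bind described above can be shown in miniature: a defensive cap in one tree and a feature floor in another define disjoint version ranges, so no single installed copy of pbr can serve both. A minimal sketch (the specifier values below are illustrative, not the exact ones from the Kilo or Liberty requirements files):

```python
# Illustrative sketch of the co-installability conflict. The exact
# specifiers lived in each project's requirements.txt; the versions
# used here are hypothetical.

def parse(version):
    """Turn '1.3.0' into a comparable tuple like (1, 3, 0)."""
    return tuple(int(part) for part in version.split("."))

# Kilo's defensive cap: effectively "pbr < 1.0".
kilo_accepts = lambda v: parse(v) < parse("1.0")
# Liberty's feature floor: effectively "pbr >= 1.3".
liberty_accepts = lambda v: parse(v) >= parse("1.3")

candidates = ["0.10.8", "0.11.1", "1.0.0", "1.2.0", "1.3.0"]
usable_by_both = [v for v in candidates
                  if kilo_accepts(v) and liberty_accepts(v)]
print(usable_by_both)  # -> []: no single pbr release satisfies both trees
```

This is why a distribution like Sid, which installs one version of a Python package system-wide, gets wedged, while per-project virtualenvs would not.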

 So I can't upload PBR 1.3.0 to Sid. This has been dealt with
 because I am the maintainer of PBR, but really, it shouldn't have
 happened. How come, for years, upgrading PBR always worked, and suddenly,
 when you start contributing to it, it breaks backward compat? I'm having
 a hard time understanding the need to break something that
 worked perfectly for so long. I'd appreciate more details.
 
 More of the ad hominems.

Robert, I'm sorry you take it this way. Sure, I was kind of frustrated
to see all the added work I have for dealing with issues with the newer
mock. Though I was writing to you directly as we know each other for a
long time. I didn't intend this as an attack.

 As I say above, it's not a PBR problem. It's badly expressed defensive
 dependencies in kilo's runtime requirements. Fix that, and kilo will
 be happy with newer pbr.

That'd be too much work to patch all of kilo's requirements.txt
everywhere; I'm afraid I prefer to leave things as they are.

 But for mock, that's another story. I'm not the maintainer, and the one
 who is, decided it was a good moment to upload to Sid. The result is 9
 FTBFS (failures to build from source) so far, because mock >= 1.1 is
 incompatible with Kilo (but does work well with Liberty, which
 *requires* it).
 
 Yes, Liberty requires it because we're porting to Python3.4 and up,
 and mock < 1.1 is incompatible with Python3.4.

Oh !!!

 Anyway, the result is that mock 1.3 broke 9 packages at least in Kilo,
 currently in Sid [1]. Maybe, as packages get rebuilt, I'll get more bug
 reports. This really is a depressing situation. Now, as the package
 maintainer for the failed packages, I have 4 solutions:

 1/ Reassign these bugs to python-mock.
 2/ Remove all of the unit tests which are currently failing because of
 the new python-mock version. This isn't great, but as I already ran
 these tests with mock 1.0.1, it should be ok.
 3/ Completely remove unit tests for these Kilo packages (or at least
 allow them to fail).
 4/ See what's been done in Liberty to fix these tests with the newer
 version of mock, and backport that to Kilo.
 
 5/ update OpenStack in unstable to be Liberty

This isn't an option: I haven't tested Liberty b2 enough, and b3 is soon
out. I prefer to upload to Sid whatever is the latest stable, meaning I
prefer to keep Liberty in Experimental until the final release, and
leave the stable Kilo in Sid.

 6/ Build something in Debian to deal with conflicting APIs of Python
 packages - we can do it with C ABIs (and do, all the time), but
 there's no infrastructure for doing it with Python. If we had that
 then Debian Python maintainers could treat this as a graceful
 transition rather than an awkward big-bang.

Even if we had such a thing (which would be great), we'd still have to
deal with transitions the way it's done in C, which is hugely painful.

 Unfortunately, 4/ isn't practical, because I'm also maintaining
 backports to Jessie, which means I'd have to write fixes so that the
 packages would work for both mock 1.0.1 and 1.3, plus it would take a
 very large amount of my time in a non-useful way (I know the package
 works as it passed unit tests with 1.0.1, so just fixing the tests is
 useless).
 
 One can't actually know that, because one of the bugs in 1.0.1 is that
 many assertions appear to work even though they don't exist: tests are
 silently broken with mock 1.0.1.

FYI, I'm going for the option of not running the failed tests because of
what you're explaining just above.
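The silent-assertion failure mode described above is easy to reproduce. A small sketch, using the stdlib's unittest.mock (which inherited the stricter behaviour that landed in the mock 1.1 line); the mocked method name here is made up:

```python
from unittest import mock

m = mock.Mock()
m.do_thing(42)

# A real assertion: this only passes because do_thing() was called with 42.
m.do_thing.assert_called_once_with(42)

# A misspelled assertion. Under mock 1.0.1, attribute access on a Mock
# auto-creates a child Mock, so this typo was just a harmless no-op call
# and the "test" silently passed no matter what the code did.
# mock >= 1.1 (and unittest.mock in Python 3.5+) instead raises
# AttributeError for unknown assert_* attribute names.
try:
    m.do_thing.assert_called_once_wiht(42)  # note the typo
    outcome = "silently passed (mock 1.0.1 behaviour)"
except AttributeError:
    outcome = "typo caught (mock >= 1.1 behaviour)"
print(outcome)
```

This is exactly why "the tests passed under 1.0.1" is not proof that the package works: some of those passing assertions may never have asserted anything.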

 So I'm left with either option 2/ and 3/. But really, I'd have preferred
 if mock didn't break things... :/

 Now, the most annoying one is with testtools (i.e., #796542). I'd
 appreciate having help on that one.
 
 Twisted's latest releases moved a private symbol that testtools 
 unfortunately depends on.
 https://github.com/testing-cabal/testtools/pull/149 - and I just
 noticed now that Colin has added the test matrix we need, so we can
 merge this and get a release out this week.

Hum... this is for the latest testtools, right? Could you help me with
fixing testtools 0.9.39 in Sid, so that Kilo can continue to build
there? Or is this too much work?

 I hope the message is heard and that it won't happen again.
 
 I certainly hope we won't have an email thread like this again :)

Well, if I get a 

Re: [openstack-dev] [neutron][networking-ovn][tempest] devstack: gate-tempest-dsvm-networking-ovn failures in Openstack CI

2015-08-26 Thread Amitabha Biswas
With the recent commits it seems that the gate-tempest-dsvm-networking-ovn 
is succeeding more or less every time. The DBDeadlock issues are still 
seen in the q-svc logs, but are not frequent enough to cause the ovsdb 
failures that were leading to the dsvm-networking job failing before.

Once in a while a test fails, e.g. 
tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_two_subnets, 
which failed recently in Jenkins. But I am pretty sure it will succeed if 
the suite is re-run.

Should the gate-tempest-dsvm-networking-ovn become voting at this time, 
and re-run/re-check if it fails in the Jenkins check?

Amitabha



From:   Russell Bryant rbry...@redhat.com
To: openstack-dev@lists.openstack.org
Date:   08/26/2015 06:24 AM
Subject:Re: [openstack-dev] [neutron][networking-ovn][tempest] 
devstack: gate-tempest-dsvm-networking-ovn failures in Openstack CI



On 08/25/2015 03:02 PM, Assaf Muller wrote:
 
 
 On Tue, Aug 25, 2015 at 2:15 PM, Russell Bryant rbry...@redhat.com
 mailto:rbry...@redhat.com wrote:
 
 On 08/25/2015 01:26 PM, Amitabha Biswas wrote:
  Russell suggested removing the MYSQL_DRIVER=MySQL-python declaration
  from local.conf (https://review.openstack.org/#/c/216413/), which
  results in PyMySQL as the default.
 
  With the above change the above DB errors are no longer seen in my
  local setup,
 
 It's great to hear that resolved the errors you saw!
 
   1. Is there any impact of using PyMySQL for the Jenkins check and
 gates.
 
 As Jeremy mentioned, this is what everything else is using (and what
 OVN was automatically already using in OpenStack CI).
 
   2. Why is the gate-networking-ovn-python27 job failing (the past
  couple of commits) in {0}
  networking_ovn.tests.unit.test_ovn_plugin.TestOvnPlugin.test_create_port_security
  [0.194020s] ... FAILED? Do we need another conversation to track this?
 
 This is a separate issue.  The networking-ovn git repo has been pretty
 quiet the last few weeks and it seems something has changed that made
 our tests break.  We inherit a lot of base plugin tests from neutron,
 so it's probably some change in Neutron that we haven't synced with
 yet.  I haven't had time to dig into it yet.
 
 
 This patch was recently merged to Neutron:
 https://review.openstack.org/#/c/201141/
 
 Looks like that unit test is trying to create a port with an invalid MAC
 address.
 I pushed a fix here:
 https://review.openstack.org/#/c/216837/

Thanks, Assaf!  Much appreciated!  :-)

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Branching strategy vs feature freeze

2015-08-26 Thread Mike Scherbakov
Dmitry, thank you for getting our conversations and opinions, spread across
different communication channels, collected here in one cohesive email.

I like Ruslan's summary, and I support every line of what Ruslan has
written here.

 Note that the two weeks is not really a hard requirement (could be
 more, or less). In this model you need to come up with a release
 candidate, which is where we create the release branch, which becomes a
 stable branch at the end of the cycle.

Thierry - I appreciate your feedback. My thinking here is quite similar (if
not the same): the time required for not merging new functionality should
depend on the balance between the amount of feature-related commits and
bugfixes. If the # of bugfixes >> the # of feature-commits, then I'd say it's
too expensive to have two branches with the need to backport every single bugfix.

One of the things behind this branching strategy is to change our
engineering processes in such a way that we simply have more commits
related to features and fewer on bugfixes after the FF date. In order to
achieve this, I'd expect us to be able to form a strong bugfix team which
will work across the release, so bugs are not piling up by FF (and to make
other incremental improvements).

We might want to consider making the date when we create the stable branch
flexible, based on the # of bugfixes vs the # of feature-related commits.
But frankly, for simplicity, I'd pick the date and work towards it - so
everyone's expectations are aligned upfront.

Thanks,


On Wed, Aug 26, 2015 at 8:44 AM Ruslan Kamaldinov rkamaldi...@mirantis.com
wrote:

 On Wed, Aug 26, 2015 at 4:23 AM, Igor Marnat imar...@mirantis.com wrote:

 Thierry, Dmitry,
 key point is that we in Fuel need to follow as much community adopted
 process as we can, and not to introduce something Fuel specific. We
 need not only to avoid forking code, but also to avoid forking
 processes and approaches for Fuel.

 Both #2 and #3 allow it, making it easier for contributors to
 participate in the Fuel.

 #3 (having internal repos for pre-release preparation, bug fixing and
 small custom features) from community perspective is the same as #2,
 provided that all the code from these internal repos always ends up
 being committed upstream. Purely internal logistics.

 The only additional note from my side is that we might want to
 consider an approach similar to what's adopted in the Puppet
 modules. AFAIK, they are typically released several weeks later
 than the OpenStack code. This is natural for Fuel, as it depends on these
 modules and on the packaging of OpenStack.


 I also think we should go with option #2. What it means to me:
 * Short FF: create stable branch couple of weeks after FF for upstream Fuel
 * Untie release schedule for upstream Fuel and MOS. This should be two
 separate schedules
 * Fuel release schedule should be more aligned with OpenStack release
 schedule. It should be similar to upstream OpenStack Puppet schedule,
 where Puppet modules are developed during the same time frame as OpenStack
 and released just a few weeks later
 * Internally vendors can have downstream branches (which is option  #3
 from Dmitry’s email)

 Thanks,
 Ruslan

-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Documentation on how to Start Contributing

2015-08-26 Thread Victor Ryzhenkin
Wow!

And changed the plugin.sh file back to original. However, with a cleaned 
devstack (./unstack.sh, ./clean.sh, and removed /opt/stack) I still got the 
error I mentioned in my previous post. Full stack log is attached.

Looks like I’ve found this tricky one ;)

In your log:
2015-08-26 21:02:18.010 | + source /home/stack/devstack/extras.d/70-murano.sh 
stack post-config
2015-08-26 21:02:18.010 | ++ is_service_enabled murano
2015-08-26 21:02:18.012 | ++ return 0

And this one:
2015-08-26 21:02:41.481 | + [[ -f /opt/stack/murano/devstack/plugin.sh ]]
2015-08-26 21:02:41.481 | + source /opt/stack/murano/devstack/plugin.sh stack 
post-config

Murano tried to deploy multiple times. I think this happened because you are 
using the plugin and the libs together. Try to remove the murano libs and 
extras from the devstack directory (lib/murano, lib/murano-dashboard, 
extras.d/70-murano.sh) or turn off the plugin. We need to use only one 
method at a time.

As per your suggestion I was going to test your first suggestion, but I was 
unable to find any murano service running on my server after the completion of 
./stack.sh (which I tested installed murano with it).
The command sudo service murano status returns murano: unrecognized service.
Am I missing something?
In devstack, services start not as daemons but in a screen session. To view 
these processes, you need to cd into your devstack dir and launch 
./rejoin-stack.sh.

To restart murano, move into the murano-api and murano-engine screen windows, 
press CTRL-C, and, using the up arrow, relaunch the service that was stopped.



-- 
Victor Ryzhenkin
Junior QA Engeneer
freerunner on #freenode

On August 27, 2015 at 2:38:11, Vahid S Hashemian 
(vahidhashem...@us.ibm.com) wrote:

Hi Victor,

Thank you very much for your detailed response. It was very helpful.

I tried the approach you suggested, and modified the local.conf file by adding 
these lines:

MURANO_REPO=/home/stack/workspace/murano
enable_plugin murano https://github.com/openstack/murano

And changed the plugin.sh file back to original. However, with a cleaned 
devstack (./unstack.sh, ./clean.sh, and removed /opt/stack) I still got the 
error I mentioned in my previous post. Full stack log is attached.



As per your suggestion I was going to test your first suggestion, but I was 
unable to find any murano service running on my server after the completion of 
./stack.sh (which I tested installed murano with it).
The command sudo service murano status returns murano: unrecognized service.
Am I missing something?

Thanks again for your assistance.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs
Email: vahidhashem...@us.ibm.com
Phone: 1-408-463-2380   

IBM Silicon Valley Lab
555 Bailey Ave.
San Jose, CA 95141





From:        Victor Ryzhenkin vryzhen...@mirantis.com
To:        Vahid S Hashemian/Silicon Valley/IBM@IBMUS, OpenStack Development 
Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date:        08/25/2015 05:47 PM
Subject:        Re: [openstack-dev] [Murano] Documentation on how to Start 
Contributing



Hi, Vahid!


- Modified /home/stack/workspace/murano/devstack/plugin.sh based on Gosha's 
suggestion and replaced MURANO_REPO=${MURANO_REPO:-${GIT_BASE}/openstack/murano.git} 
with MURANO_REPO=/home/stack/workspace/murano.

Unfortunately, I'm not sure that using a local path as MURANO_REPO will work; 
at least, I have never heard of use cases like this.
But it looks like you have given it a try. So, I suggest you make sure that 
you executed the ./unstack.sh and ./clean.sh scripts before starting the 
deployment. If you are using a clean host, this is not needed.
Also, I think there is no need to change the plugin's code. You can define 
MURANO_REPO=/home/stack/… in your localrc/local.conf file and use
enable_plugin murano https://github.com/openstack/murano

As for suggestions on how to test your local changes,
I see two easy ways to do it without using local repositories.
1. Deploy devstack with murano from master as is, using the plugin or the 
libs, and replace the old files in /opt/stack/murano with the new ones that 
you changed. After this you need to restart the murano services.
2. Upload your changes to gerrit, and use 
MURANO_REPO=https://review.openstack.org/openstack/murano and 
MURANO_BRANCH=refs/changes/…/.../….

Both methods are good.

But this errors on me when I run ./stack.sh:
       ERROR: openstack Conflict occurred attempting to store user - Duplicate 
Entry (HTTP 409) (Request-ID: req-805b487c-44fe-4155-8349-65362c2a34ee)
It would be really good if you could give more information (I mean the full 
deployment log, or at least its last part).

Best Regards!
--
Victor Ryzhenkin
Junior QA Engeneer
freerunner on #freenode

On August 26, 2015 at 2:49:46, Vahid S Hashemian 
(vahidhashem...@us.ibm.com) wrote:

OK. So I'm still having some issues with this.

Here's what I have done:

- Followed instructions on 

Re: [openstack-dev] [Murano] Documentation on how to Start Contributing

2015-08-26 Thread Vahid S Hashemian
Hi Victor,

Thanks for pointing out the issue with earlier deployment. Since I took 
your advice I don't run into that problem again.
And thanks for the pointer on how to restart murano daemons. I think I 
understand how to change murano code and test my changes locally.

I have one more question: if I want to make changes to murano-api code, 
the code does not seem to be inside the murano git repository, but under 
python-muranoclient.
But I don't see python-muranoclient files under my deployed devstack so I 
can modify and restart services. An example, would be 
python-muranoclient/muranoclient/v1/shell.py which does not seem to exist 
under /opt/stack.
Am I on the right track? If so, how do I test changes I want to make to 
the api code?

Thank you.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs






Re: [openstack-dev] [Murano] Documentation on how to Start Contributing

2015-08-26 Thread Victor Ryzhenkin
Hey! 

the code does not seem to be inside the murano git repository, but under 
python-muranoclient.
The code of murano-api is located here [1]. The client is just a client for 
using the murano API functions from Python ;)

But I don't see python-muranoclient files under my deployed devstack
True. This happened because python-muranoclient was installed from PyPI. In 
this case you will find its directory under dist-packages in the Python dir.

So, I have two suggestions for you. 

The first: To install the non-released client with devstack, you need to add 
the variable ‘LIBS_FROM_GIT=python-muranoclient’ to your local.conf/localrc. 
After deploying devstack you will see the python-muranoclient dir, and the 
latest client from master will be installed in the system. You can use 
MURANO_PYTHONCLIENT_REPO=/home/… to try to install the client with devstack 
from your local repository (as you did for murano in your last tries).

The second: I recommend you use python-virtualenv for client tests. 
Install a virtualenv, activate it, and install the changed client via 
pip install -e ${changed_client_dir}.

I hope this helps you ;)

Best regards!

[1] https://github.com/openstack/murano/tree/master/murano/api

-- 
Victor Ryzhenkin
Junior QA Engeneer
freerunner on #freenode

On August 27, 2015 at 3:56:56, Vahid S Hashemian 
(vahidhashem...@us.ibm.com) wrote:

Hi Victor,

Thanks for pointing out the issue with earlier deployment. Since I took your 
advice I don't run into that problem again.
And thanks for the pointer on how to restart murano daemons. I think I 
understand how to change murano code and test my changes locally.

I have one more question: if I want to make changes to murano-api code, the 
code does not seem to be inside the murano git repository, but under 
python-muranoclient.
But I don't see python-muranoclient files under my deployed devstack so I can 
modify and restart services. An example, would be 
python-muranoclient/muranoclient/v1/shell.py which does not seem to exist under 
/opt/stack.
Am I on the right track? If so, how do I test changes I want to make to the api 
code?

Thank you.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs  






Re: [openstack-dev] [Openstack-operators] PLEASE READ: VPNaaS API Change - not backward compatible

2015-08-26 Thread James Dempsey
On 26/08/15 23:43, Paul Michali wrote:
 James,
 
 Great stuff! Please see @PCM in-line...
 
 On Tue, Aug 25, 2015 at 6:26 PM James Dempsey jam...@catalyst.net.nz

SNIP

 1) Horizon compatibility

 We run a newer version of horizon than we do neutron.  If Horizon
 version X doesn't work with Neutron version X-1, this is a very big
 problem for us.

 
 @PCM Interesting. I always thought that Horizon updates lagged Neutron
 changes, and this wouldn't be a concern.
 

@JPD
Our Installed Neutron typically lags Horizon by zero or one release.  My
concern is how Horizon version X will cope with a point-in-time API
change.  Worded slightly differently: we rarely update Horizon and
Neutron at the same time so there would need to be a version(or
versions) of Horizon that could detect a Neutron upgrade and start using
the new API.  (I'm fine if there is a Horizon config option to select
old/new VPN API usage.)

 
 

 2) Service interruption

 How much of a service interruption would the 'migration path' cause?
 
 
 @PCM The expectation of the proposal is that the migration would occur as
 part of the normal OpenStack upgrade process (new services installed,
 current services stopped, database migration occurs, new services are
 started).
 
 It would have the same impact as what would happen today, if you update
 from one release to another. I'm sure you folks have a much better handle
 on that impact and how to handle it (maintenance windows, scheduled
 updates, etc).
 

@JPD This seems fine.

 
 We
 all know that IPsec VPNs can be fragile...  How much of a guarantee will
 we have that migration doesn't break a bunch of VPNs all at the same
 time because of some slight difference in the way configurations are
 generated?

 
 @PCM I see the risk as extremely low. With the migration, the end result is
 really just moving/copying fields from one table to another. The underlying
 configuration done to *Swan would be the same.
 
 For example, the subnet ID, which is specified in the VPN service API and
 stored in the vpnservices table, would be stored in a new vpn_endpoints
 table, and the ipsec_site_connections table would reference that entry
 (rather than looking up the subnet in the vpnservices table).
 

@JPD This makes me feel more comfortable; thanks for explaining.

 
 
 3) Heat compatibility

 We don't always run the same version of Heat and Neutron.

 
 @PCM I must admit, I've never used Heat, and am woefully ignorant about it.
 Can you elaborate on Heat concerns as may be related to VPN API differences?
 
 Is Heat being used to setup VPN connections, as part of orchestration?
 

@JPD
My concerns are two-fold:

1) Because Heat makes use of the VPNaaS API, it seems like the same
situation exists as with Horizon.  Some version or versions of Heat will
need to be able to make use of both old and new VPNaaS APIs in order to
cope with a Neutron upgrade.

2) Because we use Heat resource types like
OS::Neutron::IPsecSiteConnection [1], we may lose the ability to
orchestrate VPNs if endpoint groups are not added to Heat at the same time.


Number 1 seems like a real problem that needs a fix.  Number 2 is a fact
of life that I am not excited about, but am prepared to deal with.

Yes, Heat is being used to build VPNs, but I am prepared to make the
decision on behalf of my users... VPN creation via Heat is probably less
important than the new VPNaaS features, but it would be really great if
we could work on the updated heat resource types in parallel.

[1]
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::IPsecSiteConnection

 
 


 Is there pain for the customers beyond learning about the new API
 changes
 and capabilities (something that would apply whether there is backward
 compatibility or not)?


 See points 1,2, and 3 above.

 

 Another implication of not having backwards compatibility would be that
 end-users would need to immediately switch to using the new API, once the
 migration occurs, versus doing so on their own time frame.  Would this
 be a
 concern for you (customers not having the convenience of delaying their
 switch to the new API)?


 I was thinking that backward incompatible changes would adversely affect
 people who were using client scripts/apps to configure (a large number
 of)
 IPsec connections, where they'd have to have client scripts/apps
 in-place
 to support the new API.


 This is actually less of a concern.  We have found that VPN creation is
 mostly done manually and anyone who is clever enough to make IPsec go is
 clever enough to learn a new API/horizon interface.

 
 @PCM Do you see much reliance on tooling to setup VPN (such that having to
 update the tooling would be a concern for end-users), or is this something
 that could be managed through process/preparation?
 

@JPD  I see very little reliance on tooling to setup VPNs.  We could
manage this through preparation.

 
 


 Which is more of a logistics issue, and could be managed, IMHO.




 Would 

[openstack-dev] [Ironic][Nova] The process around the Nova Ironic driver

2015-08-26 Thread Michael Davies
Hey Everyone,

John Villalovos and I have been acting as the Nova-Ironic liaisons, which
mostly means dealing with bugs that have been raised against the Ironic
driver in Nova. So you can understand what we’ve been doing, and how you
can help us do that job better, we’re writing this email to clarify the
process we’re following.

Weekly Bug Scrub: Each week (Tuesday 2300 UTC) John and Michael meet to go
through the results of this query
https://bugs.launchpad.net/nova/+bugs?field.tag=ironic&orderby=-id&start=0
to find bugs that we don’t know about, to see what progress has been
happening, and to see if there’s any direct action that needs to be taken.
We record the result of this triage over here
https://wiki.openstack.org/wiki/Nova-Ironic-Bugs

Fix Bugs: If we are able, and have the capacity, we try and fix bugs
ourselves. Where we need it, we seek help from both the Nova and/or Ironic
communities. But finding people to help fix bugs in the Nova Ironic driver
is probably an area we can do better at (*hint* *hint*)

Review Bugs: Once fixes are proposed, we solicit reviews for these fixes.
Once we’re happy that the proposed solution isn’t completely bonkers, one
of us will +1 that review, and add it to the list of Nova bugs that need
review by Nova:
https://etherpad.openstack.org/p/liberty-nova-priorities-tracking

Attend the Nova team meeting: One of us will attend the weekly IRC meeting
to represent the Ironic team's interests within Nova. Nova might want to
discuss new requirements they have on drivers, to discuss a bug that has
been raised, or to find out, or to communicate, which bugs the other team
feel are important to address before the ever-looming next release.

Attend the Ironic team Meeting: John will attend the weekly IRC meeting to
raise any issues with the broader team that we become aware of.  It might
be that a new bug has been raised and we need to find someone willing to
take it on, or it may be that an existing bug with a proposed change is
languishing due to a lack of reviews (Michael can’t do that as 2:30am local
time is just a little wrong for an IRC meeting :)


So there it is, that's how the Ironic team are supporting the Ironic driver
in Nova.  If you have any questions, or just want to dive in and fix bugs
raised against the driver, you’re most welcome to get in touch - on IRC I’m
‘mrda’ and John is ‘jlvillal’ :)

Michael…
-- 
Michael Davies   mich...@the-davies.net
Rackspace Cloud Builders Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] python-saharaclient release 0.11.0 planned for 8/31-9/1

2015-08-26 Thread Sergey Lukjanov
Hi folks,

due to the upcoming soft dependencies, the 0.11.0 release of the sahara client
is planned for early next week, so please review any not-yet-merged changes.

If there are any patches that you think should definitely become part
of this release, please ping me on IRC / by mail.

Thanks.

P.S. it looks like this release will be the official Liberty sahara
client release.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon] Backward-incompatible changes to the Neutron API

2015-08-26 Thread Paul Michali
For details on the API/resource change, see the developer's reference
document that is under review: https://review.openstack.org/#/c/191944/

There is a section at the end, where I'm proposing a possible way to
support both versions of the API and provide backward compatibility. Feel
free to comment on the review.

Regards,

Paul Michali (pc_m)


On Wed, Aug 26, 2015 at 6:10 PM James Dempsey jam...@catalyst.net.nz
wrote:

 Greetings Heat/Horizon Devs,

 There is some talk about possibly backward-incompatible changes to the
 Neutron VPNaaS API and I'd like to better understand what that means for
 Heat and Horizon.

 It has been proposed to change Neutron VPNService objects such that they
 reference a new resource type called an Endpoint Group instead of
 simply a Subnet.

 Does this mean that any version of Heat/Horizon would only be able to
 support either the old or new Neutron API, or is there some way to allow
 a version of Heat/Horizon to support both?


 Thanks,
 James

 --
 James Dempsey
 Senior Cloud Engineer
 Catalyst IT Limited
 +64 4 803 2264
 --

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovn][tempest] devstack: gate-tempest-dsvm-networking-ovn failures in Openstack CI

2015-08-26 Thread Amitabha Biswas

 Once in a while a test fails for e.g.
 tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_two_subnets
 that failed recently in Jenkins. But I am pretty sure it will succeed if
 the suite is re-run.
 
 Have you looked to see if the same test is failing for the regular
 neutron jobs?  Or does it seem to be OVN specific?

I have not seen this failure before in any of the other (around 30) OVN logs 
I’ve parsed. It was due to an existing IP address in the create_subnet call 
(HTTP 409). OVN wasn’t in the stack trace.

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "tempest/api/network/test_dhcp_ipv6.py", line 188, in test_dhcpv6_two_subnets
      subnet_slaac = self.create_subnet(self.network, **kwargs)
    File "tempest/api/network/base.py", line 187, in create_subnet
      **kwargs)
    File "tempest/services/network/json/network_client.py", line 110, in create_subnet
      return self._create_resource(uri, post_data)
    File "tempest/services/network/json/network_client.py", line 72, in _create_resource
      resp, body = self.post(req_uri, req_post_data)
    File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 256, in post
      return self.request('POST', url, extra_headers, headers, body)
    File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 643, in request
      resp, resp_body)
    File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 705, in _error_checker
      raise exceptions.Conflict(resp_body)
  tempest_lib.exceptions.Conflict: An object with that identifier already exists
  Details: {u'detail': u'', u'type': u'IpAddressInUse', u'message': u'Unable to complete operation for network a7b73f3f-37f2-4abc-8ea0-a22291f67b4e. The IP address 2003::f816:3eff:fe31:f72e is in use.'}

I don’t know the neutron answer yet, I will look into that.

Amitabha

 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] snapshot and cloning for NFS backend

2015-08-26 Thread Kekane, Abhishek
Hi Nikola,

Thank you for update.

Abhishek Kekane

-Original Message-
From: Nikola Đipanov [mailto:ndipa...@redhat.com] 
Sent: 27 August 2015 01:10
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder][nova] snapshot and cloning for NFS backend

On 07/28/2015 09:00 AM, Kekane, Abhishek wrote:
 Hi Devs,
 
  
 
 There is an NFS backend driver for cinder, which supports only limited 
 volume handling features. Specifically, snapshot and cloning
 features are missing.
 
  
 
 Eric Harney has proposed a feature of NFS driver snapshot [1][2][3], 
 which was approved on Dec 2014 but not implemented yet.
 
  
 
 [1] blueprint 
 https://blueprints.launchpad.net/cinder/+spec/nfs-snapshots
 
 [2] cinder-specs https://review.openstack.org/#/c/133074/  - merged 
 for Kilo but moved to Liberty
 
 [3] implementation https://review.openstack.org/#/c/147186/  - WIP
 
  
 
 As of now [4] nova patch is a blocker for this feature.
 
 I have tested this feature by applying [4] nova patch and it is 
 working as per expectation.
 
  
 
 [4] https://review.openstack.org/#/c/149037/
 

so [4] is actually related to the following bug (it is linked on the
review):

https://bugs.launchpad.net/nova/+bug/1416132

The proposed patch is, as was discussed in some detail on the review, not the 
right approach, for several reasons.

I have added a comment on the bug [1] outlining what I think is the right 
solution here, however - it is far from a trivial change.

Let me know if the comment on the bug makes sense and if I need to add more 
information.

I will try to devote some time to fixing this, as I believe this is causing us 
a lot more problems in the gate on an ongoing basis (see [2] for example), but 
the discussion in the bug should be enough to get anyone else who may want to 
pick it up on the right path to making progress!

Best,
N.

[1] https://bugs.launchpad.net/nova/+bug/1416132/comments/8
[2] https://bugs.launchpad.net/nova/+bug/1445021


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] About logging-flexibility

2015-08-26 Thread Fujita, Daisuke
Hello, Ihar and Doug and Oslo team members,

I'm Daisuke Fujita,

I am writing because I would like to ask you for a code review.
Please visit the following changes, which I have uploaded.

 https://review.openstack.org/#/c/216496/
 https://review.openstack.org/#/c/216506/
 https://review.openstack.org/#/c/216524/
 https://review.openstack.org/#/c/216551/

These are suggestions for the following etherpad and spec.
 https://etherpad.openstack.org/p/logging-flexibility
 https://review.openstack.org/#/c/196752/


Previously I talked with Ihar on the Neutron IRC channel [1] about the bug 
report I filed [2].

As a result of that discussion, I took these changes over.

I'd like to merge this patch to Liberty-3, so, I'd appreciate it if you could 
cooperate.


Thank you for your cooperation.

Best Regards,
Daisuke Fujita

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2015-06-30.log.html

http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2015-08-19.log.html
(Please, find my IRC name Fdaisuke)

[2] https://bugs.launchpad.net/neutron/+bug/1466476



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Expected cli behavior when ovs-agent is down

2015-08-26 Thread Vikas Choudhary
Hi Shihanzhang,

Sorry for the late reply.
After asking for suggestions here on the mailing list, and while I was
exploring how to fix this issue, the bug was marked as a duplicate of
https://bugs.launchpad.net/neutron/+bug/1399249, so I stopped tracking it.

And on your question: yes, even after an ovs-agent restart the binding
does not recover.

Thanks & Regards
Vikas Choudhary


Hi Vikas Choudhary, when the ovs-agent service recovers (ovs-agent process
restarts), will the DHCP port not be re-bound successfully?





At 2015-08-22 14:26:08, Vikas Choudhary choudharyvikas16 at
gmail.com http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
wrote:

Hi Everybody,


I want to discuss https://bugs.launchpad.net/neutron/+bug/1348589. It has
been open for more than a year, and I could not find any discussion on it.


Scenario:
ovs-agent is down and then a network and subnet under this newly
created network are created using cli. No error visible to user, but
following irregularities are found.


Discrepancies:
1. neutron port-show dhcp-port shows:
     binding:vif_type  | binding_failed
2. Running ovs-ofctl dump-flows br-tun, no OpenFlow flow got added.
3. Running ovs-vsctl show br-int, there is no tag for the dhcp-port.


The Neutron DB will have all the attributes required to retry VIF
binding. My query is: when should we trigger this rebinding? Two
approaches I could think of are:
1) At neutron-server restart, for all ports with vif_type set to
binding_failed, plugins/ml2/drivers/mech_agent.bind_port can be
invoked as a sync-up activity.


2) The Neutron port update API,
http://developer.openstack.org/api-ref-networking-v2-ext.html, could
be enhanced to also receive VIF-binding-related options, and then
eventually plugins/ml2/drivers/mech_agent.bind_port can be
invoked. Corresponding changes would be made to the 'port update' CLI as well.


Please suggest.
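A minimal sketch of the enumeration step for approach 1 (assuming the 2015-era python-neutronclient; the commented re-binding nudge is a hypothetical operator workaround, not the proper ML2-side fix being asked for):

```python
# Sketch of enumerating ports stuck after an agent outage. Only
# failed_ports() is exercised here; the client calls below are indicative.

def failed_ports(ports):
    """Return the ports whose VIF binding failed (approach 1's input set)."""
    return [p for p in ports if p.get('binding:vif_type') == 'binding_failed']

# Against a live deployment (admin credentials assumed), re-binding can be
# nudged by clearing and restoring binding:host_id -- a hypothetical
# workaround, not the in-tree fix discussed above:
#
#   from neutronclient.v2_0 import client
#   neutron = client.Client(username='admin', password='...',
#                           tenant_name='admin',
#                           auth_url='http://controller:5000/v2.0')
#   for port in failed_ports(neutron.list_ports()['ports']):
#       host = port['binding:host_id']
#       neutron.update_port(port['id'], {'port': {'binding:host_id': ''}})
#       neutron.update_port(port['id'], {'port': {'binding:host_id': host}})
```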
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-dev] [Containers] Magnum bay-create is getting stuck at CREATE_IN_PROGRESS

2015-08-26 Thread Vikas Choudhary
Thanks Ganesh and Adrian.

I got it resolved.
The reason was that the OpenStack services were not reachable from inside
the master node. The services were listening on 127.0.0.1, because I had set
HOST_IP to 127.0.0.1 in localrc, and the master could reach only the public
network, so 127.0.0.1 was not reachable. I restacked after setting HOST_IP
in localrc to the public network interface IP of the host machine, and then
bay creation completed successfully.


Thanks
Vikas Choudhary

___

Hi Vikas,

Please debug along these lines:

- Are you able to launch an instance based on the fedora-21-atomic-3
image in Horizon? If not, check whether the image download was fine (by
comparing the size).
- Get the nova list output (you should see kube_master and
kube-minion up and running).
- Log into the console of these 2 instances from Horizon and check whether
they have booted up fine and are at the login prompt (I have seen an issue
where an instance got stuck at some point during boot and I had to reload
it).
- After this, cloud-init should start and complete.
- Get the output of heat stack-list -n.

Thanks,
Ganesh

From: Adrian Otto adrian.otto at rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev at lists.openstack.org
Date: Wednesday, 26 August 2015 10:51 am
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [Containers] Magnum bay-create is getting
stuck at CREATE_IN_PROGRESS

Vikas Choudhary,

Try heat event-show to get more information about what's happening in
the creation of the resource group for the k8s master. You might not
have enough storage free to create the nova VM to run the bay master.

Regards,

Adrian

On Aug 25, 2015, at 9:52 PM, Vikas Choudhary choudharyvikas16 at
gmail.com wrote:


I am following
https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-quickstart.rst#using-kubernetes
to try containers/magnum. After running magnum bay-create --name
k8sbay --baymodel k8sbaymodel --node-count 1, it keeps showing:

root@PRINHYLTPHP0400:/home/devstack/devstack# magnum bay-list
+--------------------------------------+--------+------------+--------------+--------------------+
| uuid                                 | name   | node_count | master_count | status             |
+--------------------------------------+--------+------------+--------------+--------------------+
| e121254f-8bca-497b-9bd9-e9f37305592e | k8sbay | 1          | 1            | CREATE_IN_PROGRESS |
+--------------------------------------+--------+------------+--------------+--------------------+
root@PRINHYLTPHP0400:/home/devstack/devstack# heat resource-list ${BAY_HEAT_NAME}

+-------------------+--------------------------------------------------------------------------------------+------------------------------+--------------------+---------------------+
| resource_name     | physical_resource_id                                                                 | resource_type                | resource_status    | updated_time        |
+-------------------+--------------------------------------------------------------------------------------+------------------------------+--------------------+---------------------+
| api_pool          | 23694171-b787-4c15-9188-3c693e3702c8                                                 | OS::Neutron::Pool            | CREATE_COMPLETE    | 2015-08-26T04:09:00 |
| api_pool_floating | 6410a233-03fe-451e-9b52-cd9fac1fcf31                                                 | OS::Neutron::FloatingIP      | CREATE_COMPLETE    | 2015-08-26T04:09:00 |
| etcd_monitor      | 2a5c325c-7f24-4c82-a252-64211b67d195                                                 | OS::Neutron::HealthMonitor   | CREATE_COMPLETE    | 2015-08-26T04:09:00 |
| etcd_pool         | 4578db12-5a88-4b1e-afe2-4f7f90bcbad1                                                 | OS::Neutron::Pool            | CREATE_COMPLETE    | 2015-08-26T04:09:00 |
| kube_masters      | 8ba9d12f-3567-4d54-969c-99077818ffa3                                                 | OS::Heat::ResourceGroup      | CREATE_IN_PROGRESS | 2015-08-26T04:09:00 |
| kube_minions      |                                                                                      | OS::Heat::ResourceGroup      | INIT_COMPLETE      | 2015-08-26T04:09:00 |
| api_monitor       | 09a569ca-334a-4576-91e2-fa98a30f3e50                                                 | OS::Neutron::HealthMonitor   | CREATE_COMPLETE    | 2015-08-26T04:09:01 |
| extrouter         | f7fb19db-2be7-4624-bf20-4867e1d7572c                                                 | OS::Neutron::Router          | CREATE_COMPLETE    | 2015-08-26T04:09:01 |
| extrouter_inside  | f7fb19db-2be7-4624-bf20-4867e1d7572c:subnet_id=fdc539a0-9ce7-4faa-a6e1-bddd83d99df9  | OS::Neutron::RouterInterface | CREATE_COMPLETE    | 2015-08-26T04:09:01 |
| fixed_network 

[openstack-dev] [openstack-infra][third-party][CI][nodepool] Uploading images to nodepool.

2015-08-26 Thread Abhishek Shrivastava
Hi Folks,

I am following Ramy's new guide for setting up the CI. Till now I have
installed master and created the slave node image using [1]. Now I want to
upload the image to nodepool, so can I use [2] to do so, or is there any
other way also to do so.


   - Also, are there any other changes that need to be made in vars.sh [3]
   and nodepool.yaml [4]?
   - And do I need to reinstall the master after making the changes in
   nodepool.yaml and vars.sh?


Please enlighten me with your ideas folks.

[1]
https://github.com/openstack-infra/project-config/tree/master/nodepool/elements#using-diskimage-builder-to-build-devstack-gate-nodes
[2] nodepool image-upload all image-name
[3]
https://github.com/rasselin/os-ext-testing-data/blob/master/vars.sh.sample#L17
[4]
https://github.com/rasselin/os-ext-testing-data/blob/master/etc/nodepool/nodepool.yaml.erb.sample

-- 


Thanks & Regards,
Abhishek
Cloudbyte Inc. http://www.cloudbyte.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CLI and RHEL registration of overcloud nodes

2015-08-26 Thread Jan Provazník

Hi,
Although rdomanager-oscplugin is not yet under TripleO, it should be 
soon, so I am sending this to the TripleO audience.


Satellite registration, from the user's point of view, is now done by passing 
a couple of specific parameters when running the openstack overcloud deploy 
command [1]. rdomanager-oscplugin checks for the presence of these params and 
adds additional env files which are then passed to heat; it also 
generates a temporary env file containing the default_params required for 
the rhel-registration template [2].


This approach is not optimal because it means that:
a) registration params have to be passed on each call of openstack 
overcloud deploy
b) other CLI commands (pkg update, node deletion) should implement/reuse 
the same logic (support same parameters) to be consistent


This is probably not necessary because registration params should be 
needed only when creating OC stack, no need to pass them later when 
running any update operation.


As a short-term solution, I think it would be better to pass the registration 
templates in the same way as other extra files (the -e param). Although the 
user will still have to pass an additional parameter when using 
rhel-registration, it will be consistent with the way e.g. network 
configuration is used, and the -e mechanism for passing additional env 
files is already supported in other CLI commands. The 
_create_registration_env method [2] would be updated to generate and add 
just the user's credentials [3] env file, and this would be needed only 
when creating the overcloud; no need to pass them when updating the stack later.


If there are no objections/feedback, I'll send a patch (and of course 
update the documentation too) which updates the CLI as described above.


Jan


[1] 
https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html
[2] 
https://github.com/rdo-management/python-rdomanager-oscplugin/blob/master/rdomanager_oscplugin/v1/overcloud_deploy.py#L366
[3] 
https://github.com/rdo-management/python-rdomanager-oscplugin/blob/master/rdomanager_oscplugin/v1/overcloud_deploy.py#L378


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] Timeout waiting for ssh access

2015-08-26 Thread Asselin, Ramy
Hi Abhi,

First, using DIB is generally easier. However, to use the scripts, you configure 
nodepool to do so.

You should already have the scripts directory set [1]
Then you configure the script as the starting point for the image-build via 
setup: prepare_node.sh [2]
Be sure to correctly configure the credentials, ssh keys, etc. as well.

You can use the nodepool image-update command to manually invoke image builds 
when using this option.

Ramy

[1] http://docs.openstack.org/infra/nodepool/configuration.html#script-dir
[2] http://docs.openstack.org/infra/nodepool/configuration.html#images
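For reference, the two settings pointed at above sit in nodepool.yaml roughly like this (an illustrative fragment based on the linked docs; provider and image names are placeholders, not values from this thread):

```yaml
script-dir: /etc/nodepool/scripts      # [1] where prepare_node.sh lives

providers:
  - name: local_01
    # ... credentials, networks, etc. ...
    images:
      - name: devstack-trusty
        base-image: 'Ubuntu 14.04'
        min-ram: 8192
        setup: prepare_node.sh         # [2] script run to prepare the image
        username: jenkins
        private-key: /home/nodepool/.ssh/id_rsa
```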




From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Tuesday, August 25, 2015 7:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [third-party] Timeout waiting for ssh access

Hi Ramy,

Can you mention the steps for glance & scripts?

On Tue, Aug 25, 2015 at 7:49 PM, Asselin, Ramy ramy.asse...@hp.com wrote:
Hi Tang,

I haven’t seen this issue. Which approach are you using to build the image? DIB 
or via glance & scripts?
Do you get the same result when using both approaches?
If using DIB, what is the OS used to build the image?

Ramy

From: Tang Chen [mailto:tangc...@cn.fujitsu.com]
Sent: Tuesday, August 25, 2015 5:02 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [third-party] Timeout waiting for ssh access

Hi all,

Does anybody have any idea about this problem ?

Since Ubuntu does not have /etc/sysconfig/network-scripts/ifcfg-*
(obviously that is a Fedora-like filesystem layout), we have tried using
CentOS, but we still got the same error.

Thanks.
On 08/24/2015 09:19 PM, Xie, Xianshan wrote:
Hi, All
  I'm still struggling to set up the nodepool env, and I got the following error messages:
--
ERROR nodepool.NodeLauncher: Exception launching node id: 13 in provider: 
local_01 error:
  Traceback (most recent call last):
    File "/home/fujitsu/xiexs/nodepool/nodepool/nodepool.py", line 405, in _run
      dt = self.launchNode(session)
    File "/home/fujitsu/xiexs/nodepool/nodepool/nodepool.py", line 503, in launchNode
      timeout=self.timeout):
    File "/home/fujitsu/xiexs/nodepool/nodepool/nodeutils.py", line 50, in ssh_connect
      for count in iterate_timeout(timeout, "ssh access"):
    File "/home/fujitsu/xiexs/nodepool/nodepool/nodeutils.py", line 42, in iterate_timeout
      raise Exception("Timeout waiting for %s" % purpose)
  Exception: Timeout waiting for ssh access
 WARNING nodepool.NodePool: Deleting leaked instance d-p-c-local_01-12 
(aa6f58d9-f691-4a72-98db-6add9d0edc1f) in local_01 for node id: 12
--

And meanwhile, in the console.log which records the info for launching this 
instance,
there is also an error as follows:
--
+ sed -i -e s/^\(DNS[0-9]*=[.0-9]\+\)/#\1/g 
/etc/sysconfig/network-scripts/ifcfg-*^M
sed: can't read /etc/sysconfig/network-scripts/ifcfg-*: No such file or 
directory^M
...
cloud-init-nonet[26.16]: waiting 120 seconds for network device
--

I have tried to figure out what's causing this error:
1. I mounted image.qcow2 and then checked the network configuration for
this instance:
$ cat etc/network/interfaces.d/eth0.cfg
   auto eth0
   iface eth0 inet dhcp

$ cat etc/network/interfaces
   auto lo
   iface lo inet loopback
   source /etc/network/interfaces.d/*.cfg

It seems good.

2. But indeed, the path /etc/sysconfig/network-scripts/ifcfg-* does
not exist, and I don't understand why it attempts to check this configuration
file, because my instance is specified as Ubuntu, not RHEL.
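That sed step assumes a RHEL-style filesystem layout; a guard like the following would make it a no-op on Ubuntu images (a sketch only, with the directory parameterised for illustration -- the real prepare scripts hard-code the path):

```shell
# Comment out DNS entries in RHEL-style ifcfg files, but skip quietly
# when the image has no such files (e.g. Ubuntu/Debian layouts).
comment_dns() {
    local dir=${1:-/etc/sysconfig/network-scripts}
    local files=("$dir"/ifcfg-*)
    # With no match the glob stays literal, so -e is false and we skip.
    [ -e "${files[0]}" ] || { echo "no ifcfg-* files in $dir; skipping"; return 0; }
    sed -i -e 's/^\(DNS[0-9]*=[.0-9]\+\)/#\1/g' "${files[@]}"
}
```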

So, could you give me some tips to work this out?
Thanks in advance.

Xiexs



__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Thanks & Regards,
Abhishek
Cloudbyte Inc. http://www.cloudbyte.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [fuel] Branching strategy vs feature freeze

2015-08-26 Thread Ruslan Kamaldinov
On Wed, Aug 26, 2015 at 4:23 AM, Igor Marnat imar...@mirantis.com wrote:

 Thierry, Dmitry,
 key point is that we in Fuel need to follow as much community adopted
 process as we can, and not to introduce something Fuel specific. We
 need not only to avoid forking code, but also to avoid forking
 processes and approaches for Fuel.

 Both #2 and #3 allow it, making it easier for contributors to
 participate in the Fuel.

 #3 (having internal repos for pre-release preparation, bug fixing and
 small custom features) from community perspective is the same as #2,
 provided that all the code from these internal repos always ends up
 being committed upstream. Purely internal logistics.

 The only additional note from my side is that we might want to
 consider an approach similar to the one adopted in the Puppet modules.
 AFAIK, they are typically released several weeks later
 than the OpenStack code. This is natural for Fuel, as it depends on these
 modules and on the packaging of OpenStack.


I also think we should go with option #2. What it means to me
* Short FF: create stable branch couple of weeks after FF for upstream Fuel
* Untie release schedule for upstream Fuel and MOS. This should be two
separate schedules
* Fuel release schedule should be more aligned with OpenStack release
schedule. It should be similar to upstream OpenStack Puppet schedule, where
Puppet modules are developed during the same time frame as OpenStack and
released just a few weeks later
* Internally vendors can have downstream branches (which is option  #3 from
Dmitry’s email)

Thanks,
Ruslan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] multiple cascade services

2015-08-26 Thread joehuang
Hello,

As we discussed in yesterday's meeting, the point of contention is how to scale 
out cascade services.


1)  In the PoC, one proxy node will only forward to one bottom OpenStack; the 
proxy node is added to a corresponding AZ, and multiple proxy nodes for one 
bottom OpenStack are feasible by adding more proxy nodes into this AZ, with the 
proxy nodes scheduled as usual.



Is this perfect? No. Because the VM’s host attribute is bound to a specific 
proxy node, these multiple proxy nodes can’t work in cluster mode, 
and each proxy node has to be backed up by one slave node.



2)  The fake node introduced in the cascade service.

Because a fanout RPC call for the Neutron API is assumed, multiple fake nodes 
for one bottom OpenStack are not allowed.

And because the traffic to one bottom OpenStack is unpredictable, and moving 
these fake nodes dynamically among cascade services is very complicated, 
we can’t deploy multiple fake nodes in one cascade service.

In the end, we have to deploy one fake node per cascade service.

And one cascade service per bottom OpenStack will limit the burst traffic 
toward that bottom OpenStack.

And you have to back up the cascade service.



3)  From the beginning, I have preferred to run multiple cascade services in 
parallel, all of them working in load-balanced cluster mode.
The APIs (Nova, Cinder, Neutron, ...) call the cascade service through RPC, and 
each RPC call is forwarded to only one of the cascade services (the RPC message 
is put on the message bus queue, and once one of the cascade services picks up 
the message, it is removed from the queue and will not be consumed by another 
cascade service). When a cascade service receives a message, it starts a task 
to execute the request. If multiple bottom OpenStacks are involved, for example 
for networking, then the networking request is forwarded to the corresponding 
bottom OpenStacks where the resources (VM, floating IP) reside.

To keep the correct order of operations, all tasks store the necessary data in 
the DB to prevent an operation being broken for a single site. (If a VM is 
being created, reboot is not allowed; such use cases are already handled on the 
API side of Nova, Cinder, Neutron, ...)

This way, we can dynamically add cascade service nodes and balance the 
traffic dynamically.
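The delivery semantics described in 3) -- several cascade services sharing one queue, each message consumed exactly once -- can be sketched with plain threads and a queue (illustrative Python, not tricircle code; a real deployment would use oslo.messaging RPC servers on a shared topic):

```python
import queue
import threading

def run_cascade_services(messages, n_services=3):
    """Competing consumers: each message is handled by exactly one service."""
    q = queue.Queue()
    for m in messages:
        q.put(m)
    handled = []
    lock = threading.Lock()

    def service(name):
        while True:
            try:
                msg = q.get_nowait()   # only one service gets each message
            except queue.Empty:
                return                 # queue drained, this service is done
            with lock:
                handled.append((name, msg))

    workers = [threading.Thread(target=service, args=('svc%d' % i,))
               for i in range(n_services)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return handled

# Every request is processed exactly once, regardless of which service
# picked it up, and adding workers spreads the load automatically.
```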


Best Regards
Chaoyi Huang ( Joe Huang )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] About permission of cinder.conf

2015-08-26 Thread liuxinguo
Hi,

Why is the permission of cinder.conf set to 640 rather than 600? Can we set it 
to 600 instead of 640, and would there be any problems if we change it?
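For context, the practical difference between the two modes is group readability: 640 lets members of the file's group read the config, while 600 restricts it to the owner alone. A minimal shell illustration (using a scratch file, not a live cinder.conf):

```shell
# 640 = owner rw, group r, other none; 600 = owner rw only.
touch cinder.conf.sample
chmod 640 cinder.conf.sample
stat -c '%a' cinder.conf.sample   # -> 640
chmod 600 cinder.conf.sample
stat -c '%a' cinder.conf.sample   # -> 600
```

So tightening to 600 would break only things that rely on reading the file via group membership.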

Any input will be appreciated, thanks!

Wilson Liu


Re: [openstack-dev] [requirements] dependencies problem on different release

2015-08-26 Thread Gareth
Btw, it's not a dependency conflict issue. If we install Python
dependencies via pip, it was okay to have foo>=1.5.0 in the past, but maybe
not now (the oslo.util -> oslo_util namespace change breaks nearly
everything). Maybe we need a pinned requirements.txt per release, like:

foo==1.5.0
bar==2.1.0

not

foo>=1.5.0
bar>=2.0.0
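The difference matters because a `>=` specifier accepts any future release, including an incompatible major version, while `==` pins the install exactly. A minimal hand-rolled check (this is a sketch, not pip's real resolver, and does not handle pre-releases or `<=`):

```python
# Tiny version-specifier checker showing why ">=" admits future
# incompatible releases while "==" pins do not.

def parse_version(v):
    """Turn '2.0.0' into a comparable tuple (2, 0, 0)."""
    return tuple(int(p) for p in v.split("."))

def satisfies(version, specifier):
    """Check a version against a single '==', '>=' or '<' specifier."""
    for op in ("==", ">=", "<"):
        if specifier.startswith(op):
            have = parse_version(version)
            want = parse_version(specifier[len(op):])
            return {"==": have == want,
                    ">=": have >= want,
                    "<":  have < want}[op]
    raise ValueError("unsupported specifier: " + specifier)

# foo>=1.5.0 happily accepts the incompatible future 2.0.0 release...
print(satisfies("2.0.0", ">=1.5.0"))   # True
# ...while the pinned requirement keeps the old install reproducible.
print(satisfies("2.0.0", "==1.5.0"))   # False
print(satisfies("1.5.0", "==1.5.0"))   # True
```

An upper cap like `bar>=2.0.0,<2.2.0` is the middle ground: it blocks the breaking release but, as Robert notes below for distro packagers, it creates its own problems.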

On Thu, Aug 27, 2015 at 3:32 AM, Robert Collins robe...@robertcollins.net
wrote:

 On 27 August 2015 at 02:00, Gareth academicgar...@gmail.com wrote:
  Hey guys,
 
  I have a question about dependencies. There is an example:
 
  On 2014.1, project A is released with its dependency in requirements.txt
  which contains:
 
  foo>=1.5.0
  bar>=2.0.0,<2.2.0
 
  and half a year later, the requirements.txt changes to:
 
  foo>=1.7.0
  bar>=2.1.0,<2.2.0
 
  It looks fine, but a potential change would be for the upstream versions of
  packages foo and bar to become 2.0.0 and 3.0.0 (a major version upgrade
  means there are incompatible changes).
 
  For bar, there will be no problems, because <2.2.0 limits the version and
  protects against major version changes. But with foo 2.0.0, it will break
  the installation of 2014.1 A, because current development can't predict
  every incompatible change in the future.

 Correct. But actually bar is a real problem for single-instance binary
 distributors - like Debian family distributions - where only one
 version of bar can be in the archive at once. For those distributions,
 when bar 3 comes out, they cannot add it to the archive until a new
 release of project A happens (or they break project A). They also
 can't add anything to the archive that depends on bar >= 2.2.0, because
 they can't add bar. So it leads to gridlock. We are now avoiding adding
 such defensive upper bounds to OpenStack's requirements, and won't add
 them except in exceptional circumstances: when we /know/ that the thing
 is broken, we may - if we can't get it fixed.

  A real example is enabling Rally for OpenStack Juno. Rally doesn't
  officially support old releases, but I could check out its code at the Juno
  release date so that both code bases match. However, even if I use the old
  requirements.txt to install dependencies, many packages are installed at
  their current upstream versions and some of them break. An ugly workaround
  is to copy the pip list from an old Juno environment and install those
  versions directly. I hope there are better ways to do this. Anyone have
  smart ideas?

 As Boris says, use virtualenvs. They'll solve 90% of the pain.

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
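Rob's virtualenv suggestion can be sketched like this (the environment name and the requirements file are illustrative):

```shell
# Create an isolated environment so old pinned dependencies don't
# collide with whatever is installed system-wide.
python3 -m venv rally-juno
. rally-juno/bin/activate

# Inside the venv, pip installs into rally-juno/ only, e.g.:
#   pip install -r requirements.txt   # the Juno-era requirements file
python -c "import sys; print(sys.prefix)"   # resolves inside rally-juno/

deactivate
```

Each old release can then live in its own environment, so the upstream-version drift Gareth describes is contained per environment rather than fought system-wide.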





-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*


[openstack-dev] [tricircle] DAL progress

2015-08-26 Thread Vega Cai
Hi folks,

As we discussed before, DAL needs to provide an API to access resources in the
top and bottom OpenStack, so I implemented a client wrapper with an API
like this:

list_servers(self, cxt, site, filters)

cxt is the context object storing authorization information, and site tells
DAL where to send the REST request.

We have a siteserviceconfiguration table, and we have three choices for how
to use it.

1. If we don't have an admin account and we don't store the service
catalog (returned from Keystone when getting a token, containing endpoint
information) in cxt, then there's no way to retrieve endpoints from
Keystone, so we need to query endpoints from the siteserviceconfiguration
table. The user needs to register the site-service mapping via the cascade
service API, and if endpoints are updated in Keystone, the user needs to
update the mapping.

2. If we don't have an admin account but do use the service catalog, then the
cascade service needs to fill cxt with the service catalog so DAL can obtain
endpoints from cxt. Endpoint updates have no impact since we always have the
newest endpoint information.

3. If we have an admin account, then we can retrieve endpoints via
endpoint-list.
(1) Use the siteserviceconfiguration table as a cache; then DAL needs to
update this table when the client fails to connect to services (endpoints may
have been updated).
(2) No cache; then endpoint updates again have no impact.
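A minimal sketch of the wrapper's endpoint resolution, combining choices 1 and 2 (all names here - SiteClient, _endpoint_for, the dict mirroring the siteserviceconfiguration table - are hypothetical, not the actual tricircle code):

```python
# Hypothetical sketch of the DAL client wrapper described above.

class SiteClient:
    """Dispatches REST calls to the right bottom OpenStack site."""

    def __init__(self, site_service_configuration):
        # site -> service -> endpoint URL, mirroring the DB table (choice 1)
        self._config = site_service_configuration

    def _endpoint_for(self, cxt, site, service):
        # Prefer the service catalog carried in the context (choice 2),
        # fall back to the configuration table (choice 1).
        catalog = getattr(cxt, "service_catalog", None) or {}
        if site in catalog and service in catalog[site]:
            return catalog[site][service]
        return self._config[site][service]

    def list_servers(self, cxt, site, filters=None):
        endpoint = self._endpoint_for(cxt, site, "compute")
        # A real implementation would issue GET {endpoint}/servers here;
        # we just return the resolved endpoint and filters for illustration.
        return {"endpoint": endpoint, "filters": filters or {}}


class FakeContext:
    service_catalog = {}  # empty catalog forces the table fallback


client = SiteClient({"site1": {"compute": "http://site1:8774/v2.1"}})
print(client.list_servers(FakeContext(), "site1", {"status": "ACTIVE"}))
```

With this shape, choice 3's cache behaviour would just be an extra refresh step in `_endpoint_for` when the connection fails.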

BR
Zhiyuan