Re: [openstack-dev] [neutron] till when must code land to make it in liberty

2015-07-31 Thread Armando M.
On 31 July 2015 at 20:33, Paul Carver pcar...@paulcarver.us wrote:

 On 7/31/2015 9:47 AM, Kyle Mestery wrote:

 However, it's reasonable to assume the later you propose your RFE bug, the
 less of a chance it has of making it. We do enforce the Feature Freeze
 [2],
 which is the week of August 31 [3]. Thus, effectively you have 4 weeks to
 submit patches for new features.


 Does the feature freeze apply to big tent work? I certainly think we
 should try to stick as close to Neutron process as possible, but I'm
 wondering if we need to consider August 31 a hard deadline for the
 networking-sfc work.

 I suspect we won't be feature complete by the 31st; we will probably need
 to work well into September in order to ensure that we have something with
 all the necessary parts working.


Technically speaking, the projects under the neutron folder have an
independent release schedule [1,2], so you could go past the deadline if
need be.

[1] http://governance.openstack.org/reference/tags/release_independent.html
[2]
http://governance.openstack.org/reference/projects/neutron.html#project-neutron





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Let's talk about API versions

2015-07-31 Thread Devananda van der Veen
It sounds like we all agree -- the client we ship should default to a
fixed, older version. Anyone who wants newer functionality can pass a newer
version to their client.

Here's the current state of things:

server:
- stable/kilo: 1.6
- current: 1.11

client:
- stable/kilo: 1.6
- latest release (0.7.0): 1.6
- current: 1.9

So -- since we haven't released a client that sends a header > 1.6, I
propose that we set the client back to sending the 1.6 header right away.
While having the client default to 1.1 would be ideal, this should still
keep the Jackson the Absents of the world as happy as reasonably possible
moving forward, without breaking anyone that is packaging Kilo already.

(yes, this may affect Olivia the Contributor, but that's OK because Olivia
will have read this email :) )

-Deva


On Fri, Jul 31, 2015 at 2:50 PM Jim Rollenhagen j...@jimrollenhagen.com
wrote:

 On Fri, Jul 31, 2015 at 02:37:52PM -0700, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2015-07-31 04:14:54 -0700:
   On 07/30/2015 04:58 PM, Devananda van der Veen wrote:
   snip
Thoughts?

* I'm assuming it is possible to make micro version changes to the
1.x API, as 1.10.1, 1.10.2, etc.


Despite most folks calling this microversions, I have been trying to
simply call this API version negotiation.

To your question, no -- the implementations by Nova and Ironic, and the
proposal that the API-WG has drafted [1], do not actually support
MAJOR.MINOR.PATCH semantics.

It has been implemented as a combination of an HTTP request to
http(s)://<server URL>/<major>/<resource URI> plus a header
X-OpenStack-<service>-API-Version: <major>.<minor>.

The major version number is duplicated in both the URI and the header,
though Ironic will error if they do not match. Also, there is no patch
or micro version.

So, were we to change the major version in the header, I would expect
that we also change it in the URL, which means registering a new
endpoint with Keystone, and, well, all of that.
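To make that concrete, the negotiation boils down to a request like the
following sketch (assuming the requests library; the endpoint and token are
placeholders, and the header name here follows Ironic's own spelling,
X-OpenStack-Ironic-API-Version):

    # A sketch only, not Ironic's or its client's code. Endpoint and token
    # are placeholders; the header carries the negotiated minor version.
    import requests

    resp = requests.get(
        'http://ironic.example.com:6385/v1/nodes',         # major version in the URL
        headers={'X-OpenStack-Ironic-API-Version': '1.6',  # minor version in the header
                 'X-Auth-Token': 'PLACEHOLDER-TOKEN'})
    print(resp.status_code)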
  
   Right, it's important to realize that the microversion mechanism is not
   semver, intentionally. It's inspired by HTTP content negotiation, as
    Deva said. I wrote up a lot of the rationale for the model in Nova here,
   which the Ironic model is based off of -
   https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/
  
 
  Thanks Sean, this post was exactly what I needed to understand the
  inspiration behind the current situation.
 
    Ironic is a little different. It's entirely an admin API. And most users
    are going to only talk to an Ironic that they own the deployment schedule
    on. So the multi-cloud-that-you-don't-own concern might not be there.
    But it would also be confusing to all users if Ironic goes down a
    different path with microversions, and still calls it the same thing.
  
 
  I think being single-tenant makes the impact of changes different,
  however the solution can be the same. While tools that use Ironic may
  not be out in the wild as much from an operator perspective, there will
  be plenty of tools built to the Ironic API that will want to be
  distributed to users of various versions of Ironic.
 
  It sounds to me like for Ironic, the same assumption should be made as
  in the outlined Jackson the Absent solution: assume that no version means
  the old version, and require specifying the new version to get any new
  behavior.
 
  What is preventing Ironic from embracing that?

 So, this is actually how the Ironic API behaves. However, it was at some
 point decided that the client should have a more recent default version
 (which is the main topic for this thread).

 I agree with you; I think this is the best route.

 // jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] till when must code land to make it in liberty

2015-07-31 Thread Kyle Mestery
On Fri, Jul 31, 2015 at 6:29 AM, Andreas Scheuring 
scheu...@linux.vnet.ibm.com wrote:

 Hi,
 as there is no official feature freeze in neutron anymore, there still
 must be a cut-off date or at least a cut-off time frame for liberty
 code.

 Can anyone tell me (roughly) when this is? Is this liberty-3?

 I wonder if it's still possible to propose an RFE and get it into
 liberty...


You are correct, we do not enforce deadlines anymore for specs/RFEs ([1],
second paragraph).

However, it's reasonable to assume the later you propose your RFE bug, the
less of a chance it has of making it. We do enforce the Feature Freeze [2],
which is the week of August 31 [3]. Thus, effectively you have 4 weeks to
submit patches for new features.

Thanks!
Kyle

[1] http://docs.openstack.org/developer/neutron/policies/blueprints.html
[2] https://wiki.openstack.org/wiki/FeatureFreeze
[3] https://wiki.openstack.org/wiki/Liberty_Release_Schedule


 Thanks!

 --
 Andreas
 (IRC: scheuran)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [all][infra] CI System is broken

2015-07-31 Thread Jeremy Stanley
On 2015-07-31 14:49:52 +0800 (+0800), Gareth wrote:
 Could this issue be fixed today?

I believe it is, now that we've restarted with
https://review.openstack.org/207675 applied.

 Btw, is it possible to design a special mode for gate/zuul? If ops
 switch to that mode, all new gerrit events can't trigger any
 jenkins jobs.

I must not be understanding what you're describing, since it sounds
exactly like Zuul's existing graceful shutdown behavior and also
because I have no idea how that would help this situation at all.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [all][infra] CI System is broken

2015-07-31 Thread Joshua Hesketh
On Fri, Jul 31, 2015 at 10:29 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-07-31 14:49:52 +0800 (+0800), Gareth wrote:
  Could this issue be fixed today?

 I believe it is, now that we've restarted with
 https://review.openstack.org/207675 applied.

  Btw, is it possible to design a special mode for gate/zuul? If ops
  switch to that mode, all new gerrit events can't trigger any
  jenkins jobs.

 I must not be understanding what you're describing, since it sounds
 exactly like Zuul's existing graceful shutdown behavior and also
 because I have no idea how that would help this situation at all.


I think the suggestion might be to not add anything new to a queue when
zuul is in a particular mode. In other words, to give zuul a chance to
catch up.

However, this is difficult because zuul will eventually have to run jobs
for those changes at some point, so it may as well queue them immediately
and get to them when it can.

Cheers,
Josh


 --
 Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tempest]No way to skip S3 related tests

2015-07-31 Thread Matthew Treinish
On Fri, Jul 31, 2015 at 12:47:55PM +0200, Jordan Pittier wrote:
 Hi,
 With the commit [1] "minimize the default services" that happened in April,
 nova-objectstore is not run by default. Which means that by default,
 Devstack doesn't provide any S3-compatible API (because swift3 is not
 enabled by default, of course).

Do those tests even work against swift3? I guess there isn't a reason they
shouldn't; IIRC they just use boto to make S3 API calls.

 
 Now, I don't see any config flag or mechanism in Tempest to skip S3-related
 tests. So, out of the box, we can't have a full green Tempest run.
 
 Note that there is a Tempest config flag compute_feature_enabled.ec2_api.
 And there's also a mechanism implemented in 2012 by afazekas (see [2]) that

So frankly the method we have for skipping the ec2 tests in [2] is kinda
insane; it's opaque and confusing, which is why we added the new flag to try
and make it clear. Although, I still think there might be issues in the skip
logic code around ec2.

 tried to skip S3 tests if an HTTP connection to 'boto.s3_url' failed with
 a NetworkError, but that mechanism doesn't work anymore: the tests are not
 properly skipped.
 
 I'd like your opinion on the correct way to fix this:
 1) Either introduce an object_storage_feature_enabled.s3_api flag in Tempest
 and skip S3 tests if the value is False. This requires an additional
 patch to devstack to properly set the value of the
 object_storage_feature_enabled.s3_api flag.

I think this is the right way forward, at least in the short term. If the nova
s3 implementation is really decoupled from the ec2 api then we should handle
those separately.
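For illustration, a flag-driven skip might look roughly like this (a sketch
only; the object_storage_feature_enabled.s3_api option is the proposed, not
yet existing, flag, and the test class wiring is illustrative):

    # A sketch of option 1; the config option is hypothetical at this point.
    from tempest import config
    from tempest import test

    CONF = config.CONF

    class S3BucketsTest(test.BaseTestCase):
        @classmethod
        def skip_checks(cls):
            super(S3BucketsTest, cls).skip_checks()
            # An explicit, operator-controlled skip, instead of automagic
            # network-error detection.
            if not CONF.object_storage_feature_enabled.s3_api:
                raise cls.skipException('S3 API is not enabled')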

 
 2) Or, try to fix the mechanism in tempest/thirdparty/boto/test.py that
 auto-magically skips the S3 tests on NetworkError.

This is definitely not an option: what if there was just a misconfigured
deployment that caused a NetworkError to be raised, but the intent was to have
S3 enabled? We shouldn't skip anything automagically in tempest. [3][4]

 
 What do you think?

Honestly, removing the thirdparty tests dir from tempest is something I've
wanted to do for a long time. They really don't fit into the scope of the
project, and at this point are more a headache than anything else. I think if
we prioritized moving those into a tempest plugin and removing them from the
tree it would make things a lot better.

 
 Jordan
 
 
 [1]
 https://github.com/openstack-dev/devstack/commit/279cfe75198c723519f1fb361b2bff3c641c6cef
 [2]
 https://github.com/openstack/tempest/commit/a23f500725df8d5ae83f69eb4da5e47736fbb647#diff-ea760d854610bfed1ae3daa4ac242f74R133

[3] http://docs.openstack.org/developer/tempest/REVIEWING.html#being-explicit
[4] http://docs.openstack.org/developer/tempest/HACKING.html#skipping-tests


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Release of python-fuelclient today

2015-07-31 Thread Roman Prykhodchenko
Hi, no, the usual "how to contribute".

 On 31 Jul 2015, at 15:01, Roman Prykhodchenko m...@romcheg.me wrote:
 
 Folks,
 
 today I’m going to make a new public release of Fuel Client.
 If you badly need to merge something before that or have any objections, 
 please let me know before 17:00 CEST (UTC+2).
 
 
 - romcheg
 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Release of python-fuelclient today

2015-07-31 Thread Roman Prykhodchenko
Sorry, folks, I pressed the wrong button when replying.

 On 31 Jul 2015, at 15:56, Roman Prykhodchenko m...@romcheg.me wrote:
 
  Hi, no, the usual "how to contribute".
 
  On 31 Jul 2015, at 15:01, Roman Prykhodchenko m...@romcheg.me wrote:
 
 Folks,
 
 today I’m going to make a new public release of Fuel Client.
 If you badly need to merge something before that or have any objections, 
 please let me know before 17:00 CEST (UTC+2).
 
 
 - romcheg
 
 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][fuel-library] Librarian changes

2015-07-31 Thread Aleksandra Fedorova
 So far CI has been successful on all of these
changes, and bvt is currently running.

Small update - BVT test passed.

On Fri, Jul 31, 2015 at 6:27 AM, Alex Schultz aschu...@mirantis.com wrote:
 Hey everyone,

 During the fuel meeting today we discussed the librarian changes and
 their status.
 As part of this work, the wiki page was updated and a first attempt at
 migrating the
 following modules has been completed pending merge:

 stdlib
 concat
 inifile
 ssh
 ntp
 apache
 firewall
 xinetd
 cinder
 apt*

 It should be noted that apt is currently blocked by the lack of a mirror so
 while it has
 been prepared, it should not be merged at this time.

 As part of this migration we are doing two things. The first is an update to
 the build
 process that is included as part of the initial librarian[0] patch.  The
 other patches
 consist of the actual module code changes.

 Here is the list of the diffs for each change so that it can be reviewed and
 people can
 raise concerns if there are any with this change. As part of the migration,
 I inspected
 the code and file differences for each module to determine how much impact
 they might
 have.  I chose the list of modules based on their minimal differences from
 the upstream
 or if they already had our forked differences rolled into a newer version of
 the module.
 For this list, I took the current stable iso (#110) and rebased the changes
 on top of this
 to create a custom iso with just the librarian changes. We have kicked off a
 bvt_2 test for
 the custom iso as well. From this iso I have extracted the fuel-library
 package from both
 of these isos and exploded the fuel-library folder structure to do the
 diffs.

 Code Changes:

 For stdlib, the only differences are related to git, travis or fixtures[1].
 There are no
 puppet code changes as part of the librarian migration.

 For concat, the only differences were a git folder and in a custom change to
 the spec tests[2].
 The test difference[3] was a change we made because it was failing our
 syntax checker. This change has been included in a newer version of concat
 (1.2.4) but is not necessary when the module gets moved to be included via
 librarian.

 For inifile, the only difference is the addition of git and metadata
 files[4].

 For ssh, the only difference is a single line to have the config notify
 service[5]. This
 difference is already covered by another file and is not needed[6].

 For ntp, this change introduces more code changes[7] because we are updating
 the module to the 4.0.0 version, since functionality we had previously
 extended is now covered by 4.0.0 vs 3.3.0[8]. The changes in our fork were
 upstreamed and are included in 4.0.0.

 For apache, this change includes an upgrade from 1.2.0 to 1.3.0[9][10]. Our
 fork had a
 customization made which was contributed upstream.
 (apache::mod::proxy_connect)

 For firewall, this change also includes an upgrade from 1.0.2 to 1.2.0[11],
 as our fork had MAC support added[12], which is now covered upstream.

 For xinetd, the only change was the addition of a .git folder and a
 .gitignore with librarian.

 For cinder, the only change was the addition of .git, .gitignore, and
 .gitreview.

 Once we can get the apt mirror created, the only change for that is also the
 addition of
 .git.


 If there are any of these upgrades/changes that we do not want to tackle
 right now, I can
 adjust the review order such that it can be skipped for now.  Please take
 some time to
 review these changes and raise concerns.  So far CI has been successful on
 all of these
 changes, and bvt is currently running.

 Also please take some time to review the changes themselves:
 https://review.openstack.org/#/q/status:open+project:stackforge/fuel-library+branch:master+topic:bp/fuel-puppet-librarian,n,z

 Please raise any concerns as quickly as possible as this is the last call
 for objections
 for these reviews.  This has been talked about extensively and these reviews
 have
 been available for several weeks now.

 Thanks,
 -Alex


 [0] https://review.openstack.org/#/c/202763/
 [1] http://paste.openstack.org/show/406523/
 [2] http://paste.openstack.org/show/406524/
 [3] http://paste.openstack.org/show/406525/
 [4] http://paste.openstack.org/show/406526/
 [5] http://paste.openstack.org/show/406527/
 [6]
 https://github.com/saz/puppet-ssh/blob/v2.4.0/manifests/server/config.pp#L9
 [7] http://paste.openstack.org/show/406536/
 [8] https://github.com/puppetlabs/puppetlabs-ntp/compare/3.3.0...4.0.0
 [9] http://paste.openstack.org/show/406538/
 [10] https://github.com/puppetlabs/puppetlabs-apache/compare/1.2.0...1.3.0
 [11] https://github.com/puppetlabs/puppetlabs-firewall/compare/1.0.2...1.2.0
 [12] https://review.openstack.org/#/c/92167/


Re: [openstack-dev] [openstack-ansible] [os-ansible-deployment] Kilo - Liberty Upgrade Problems

2015-07-31 Thread Jesse Pretorius
I'm adding openstack-operators too as this is a discussion that I think it
would be useful to have their input for.

On 30 July 2015 at 19:18, Ian Cordasco ian.corda...@rackspace.com wrote:

 Hey all,

 As you may have seen elsewhere on openstack-dev, OpenStack is changing the
 versioning for the service projects. This means our previous upgrade
 solution will not continue to work. For context, one of our project's
 goals is to have in-place upgrades be a reality. Previously, using our
 repository (package) mirror doing:

 # pip install -U {{servicename}}

 Was perfectly fine. The problem will now occur that 2015.1.0 (kilo) is
 more recent than any of the new service version numbers (as far as pip is
 concerned). This would be resolved if the release management team for
 OpenStack properly used an epoch to indicate that the versioning scheme is
 fundamentally different and something like Glance 8.0.0 should sort after
 Glance 2015.1.0, but they won't (for reasons that you can read in earlier
 threads on this list).
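The ordering problem is easy to verify with setuptools' own version parser,
which also shows how an epoch would have fixed it:

    # PEP 440 ordering is purely numeric, so the scheme change looks like a
    # downgrade to pip; an epoch marker (declined upstream) restores order.
    from pkg_resources import parse_version

    assert parse_version('2015.1.0') > parse_version('11.0.0')
    assert parse_version('1!11.0.0') > parse_version('2015.1.0')  # epoch wins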


Yes. This is going to cause quite a few headaches I'm sure.


 So, in order to satisfy the goal of in-place upgrades, we need a way
 around this. Currently, we use a tool called 'yaprt' to build wheels and
 repository indices for our project. This tool can (and currently does)
 create reports of the built files and exports those reports as JSON. We
 can use this to know the version generated for a service and then instead
 do:

 # pip install {{servicename}}=={{yaprt_generated_version}}

 This will force pip to ignore the fact that the existing (kilo)
 installation is actually supposed to sort as more recent because you're
 telling it to install a very specific version. This will likely need to be
 our upgrade path going forward unless we also require operators to clear
 out their existing repository mirror of packages with Kilo versions (and
 then we can go back to relying on pip's version sorting semantics to do
 pip install -U {{servicename}}).


So what would the resulting version be? Would the python wheel be 2016.x.x
or would the file simply be named that so that we're sitting with this
workaround for only one cycle and future cycles can revert to the previous
process?


 This is, at the moment, the seemingly simplest way to work around the
 brokenness that is the upstream versioning change.

 If you can think of a different way of approaching this, we'd love to hear
 it. If not, Kevin or myself will probably start working on this approach
 in a week or two so it's ready for when Liberty is actually released and
 we can start testing upgrades from the kilo branch to master (which is
 currently tracking liberty).


This is not a fun problem to have to solve, but it seems a reasonable
solution. Whatever we do I'd prefer to see it as a solution that we only
have to carry for one cycle so that all versioning matches upstream from
then on. If that's not possible then some sort of epoch-style workaround
like this may just be something we have to live with.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nova-database update

2015-07-31 Thread Pradeep Kiruvale
Hi All,


I am new to OpenStack. I have a problem understanding how the Nova database
is updated with resource information.

My understanding, after going through the architecture documents and code, is
that the Nova database is the one which stores the resource information about
the compute servers. The Nova scheduler is responsible for placing a VM on a
correct compute node where the resources are available. Nova-compute is
responsible for creating the VMs/containers, etc.

The question here is: who actually updates the resource information in the
Nova database in real time, by reading the resource information from the
compute servers/nodes?

I have read that agents are responsible for this, but I didn't find which
code does that. Can anyone please help clarify this and point me to the code
that does it?


Thanks in advance

Cheers,
Pradeep
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Puppet] Keystone V2/V3 service endpoints

2015-07-31 Thread Matthew Mosesohn
Jesse, thanks for raising this. Like you, I should just track upstream
and wait for full V3 support.

I've taken the quickest approach and written fixes to
puppet-openstacklib and puppet-keystone:
https://review.openstack.org/#/c/207873/
https://review.openstack.org/#/c/207890/

and again to Fuel-Library:
https://review.openstack.org/#/c/207548/1

I greatly appreciate the quick support from the community to find an
appropriate solution. Looks like I'm just using a weird edge case
where we're creating users on a separate node from where keystone is
installed and it never got thoroughly tested, but I'm happy to fix
bugs where I can.

-Matthew

On Fri, Jul 31, 2015 at 3:54 PM, Jesse Pretorius
jesse.pretor...@gmail.com wrote:
 With regards to converting all services to use Keystone v3 endpoints, note
 the following:

 1) swift-dispersion currently does not support consuming Keystone v3
 endpoints [1]. There is a patch merged to master [2] to fix that, but a
 backport to kilo is yet to be done.
 2) Each type (internal, admin, public) of endpoint created with the Keystone
 v3 API has its own unique id, unlike with the v2 API where they're all
 created with a single ID. This results in the keystone client being unable
 to read the catalog created via the v3 API when querying via the v2 API. The
 solution is to use the openstack client and to use the v3 API but this
 obviously needs to be noted for upgrade impact and operators.
 3) When glance is set up to use swift as a back-end, glance_store is unable
 to authenticate to swift when the endpoint it uses is a v3 endpoint. There
 is a review to master in progress [3] to fix this which is unlikely to make
 it into kilo.

 We (the openstack-ansible/os-ansible-deployment project) are tracking these
 issues and doing tests to figure out all the bits. These are the bugs we've
 hit so far. Also note that there is a WIP patch to gate purely on Keystone
 v3 API's which is planned to become voting (hopefully) by the end of this
 cycle.

 [1] https://bugs.launchpad.net/swift/+bug/1468374
 [2] https://review.openstack.org/195131
 [3] https://review.openstack.org/193422



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tempest] kwargs of service clients for POST/PUT methods

2015-07-31 Thread Jordan Pittier
Hi,
So after I took a look at Ken'ichi's recent proposed changes, I think this
is the right approach. The kwargs approach has the nice benefit of being
generic so that if a consumer (say Nova) of the client class wants to add a
new parameter to one of its API, it can do it without the need of updating
the client class. This makes the client class more lightweight and should
ease its adoption.
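For illustration, the **kwargs style looks roughly like this (a sketch
modeled on the service-client pattern; the class and endpoint names are
illustrative, not Tempest's exact code):

    import json

    from tempest_lib.common import rest_client

    class QuotasClient(rest_client.RestClient):
        def update_quota_set(self, tenant_id, **kwargs):
            # Any quota attribute (e.g. cores=20, ram=51200) is passed
            # through verbatim, so a new API parameter needs no client
            # change.
            put_body = json.dumps({'quota_set': kwargs})
            resp, body = self.put('os-quota-sets/%s' % tenant_id, put_body)
            return rest_client.ResponseBody(resp, json.loads(body))

A call like update_quota_set(tid, cores=20) then sends exactly
{"quota_set": {"cores": 20}} to the API.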

But I'd like to hear what other Tempest developers think about that? (The
topic was mentioned in yesterday's QA meeting.)

Jordan

On Fri, Jul 10, 2015 at 9:02 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
wrote:

 Hi Anne,

 2015-07-09 12:22 GMT+09:00 Anne Gentle annegen...@justwriteclick.com:
  On Wed, Jul 8, 2015 at 9:48 PM, GHANSHYAM MANN ghanshyamm...@gmail.com
  wrote:
  On Thu, Jul 9, 2015 at 9:39 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
 
  wrote:
   2015-07-08 16:42 GMT+09:00 Ken'ichi Ohmichi ken1ohmi...@gmail.com:
   2015-07-08 14:07 GMT+09:00 GHANSHYAM MANN ghanshyamm...@gmail.com:
   On Wed, Jul 8, 2015 at 12:27 PM, Ken'ichi Ohmichi
   ken1ohmi...@gmail.com wrote:
  
   By defining all parameters on each method like update_quota_set(),
 it
   is easy to know what parameters are available from caller/programer
   viewpoint.
  
   I think this can be achieved with former approach also by defining
 all
   expected param in doc string properly.
  
   You are exactly right.
   But current service clients contain 187 methods *only for Nova* and
   most methods don't contain enough doc string.
   So my previous hope which was implied was we could avoid writing doc
   string with later approach.
  
   I am thinking it is very difficult to maintain doc string of REST APIs
   in tempest-lib because APIs continue changing.
   So instead of doing it, how about putting the link of official API
   document[1] in tempest-lib and concentrating on maintaining official
   API document?
   OpenStack APIs are huge now and It doesn't seem smart to maintain
   these docs at different places.
  
 
  ++, this will be great. Even API links can be provided in both class
  doc string as well as each method doc string (link to specific API).
  This will improve API ref docs quality and maintainability.
 
 
  Agreed, though I also want to point you to this doc specification. We
 hope
  it will help with the maintenance of the API docs.
 
  https://review.openstack.org/#/c/177934/
 
  I also want Tempest maintainers to start thinking about how a diff
  comparison can help with reviews of any changes to the API itself. We
 have a
  proof of concept and need to do additional work to ensure it works for
  multiple OpenStack APIs.

 Thanks for your feedback,
 That will be a big step for improving the API docs, I also like to
 join for working together.

 Thanks
 Ken Ohmichi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Puppet] Keystone V2/V3 service endpoints

2015-07-31 Thread Jesse Pretorius
With regards to converting all services to use Keystone v3 endpoints, note
the following:

1) swift-dispersion currently does not support consuming Keystone v3
endpoints [1]. There is a patch merged to master [2] to fix that, but a
backport to kilo is yet to be done.
2) Each type (internal, admin, public) of endpoint created with the
Keystone v3 API has its own unique id, unlike with the v2 API where they're
all created with a single ID. This results in the keystone client being
unable to read the catalog created via the v3 API when querying via the v2
API. The solution is to use the openstack client and to use the v3 API but
this obviously needs to be noted for upgrade impact and operators.
3) When glance is set up to use swift as a back-end, glance_store is unable
to authenticate to swift when the endpoint it uses is a v3 endpoint. There
is a review to master in progress [3] to fix this which is unlikely to make
it into kilo.

We (the openstack-ansible/os-ansible-deployment project) are tracking these
issues and doing tests to figure out all the bits. These are the bugs we've
hit so far. Also note that there is a WIP patch to gate purely on Keystone
v3 API's which is planned to become voting (hopefully) by the end of this
cycle.

[1] https://bugs.launchpad.net/swift/+bug/1468374
[2] https://review.openstack.org/195131
[3] https://review.openstack.org/193422
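As a concrete illustration of point 2, creating a catalog entry via the v3
API yields one endpoint record (and one id) per interface; a rough sketch
with python-keystoneclient, where the auth details, service id and URLs are
placeholders:

    from keystoneclient.auth.identity import v3
    from keystoneclient import session
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_name='Default',
                       project_domain_name='Default')
    keystone = client.Client(session=session.Session(auth=auth))

    # Under v3 each interface becomes its own endpoint record with its own
    # id; under v2 one record carried all three URLs under a single id.
    for interface in ('public', 'internal', 'admin'):
        ep = keystone.endpoints.create(service='SERVICE_ID',
                                       interface=interface,
                                       url='http://glance:9292',
                                       region='RegionOne')
        print(interface, ep.id)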
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Puppet] Keystone V2/V3 service endpoints

2015-07-31 Thread Rich Megginson

On 07/31/2015 07:18 AM, Matthew Mosesohn wrote:

Jesse, thanks for raising this. Like you, I should just track upstream
and wait for full V3 support.

I've taken the quickest approach and written fixes to
puppet-openstacklib and puppet-keystone:
https://review.openstack.org/#/c/207873/
https://review.openstack.org/#/c/207890/

and again to Fuel-Library:
https://review.openstack.org/#/c/207548/1

I greatly appreciate the quick support from the community to find an
appropriate solution. Looks like I'm just using a weird edge case
where we're creating users on a separate node from where keystone is
installed and it never got thoroughly tested, but I'm happy to fix
bugs where I can.


Most puppet deployments either realize all keystone resources on the 
keystone node, or drop an /etc/keystone/keystone.conf with admin token 
onto non-keystone nodes where additional keystone resources need to be 
realized.




-Matthew

On Fri, Jul 31, 2015 at 3:54 PM, Jesse Pretorius
jesse.pretor...@gmail.com wrote:

With regards to converting all services to use Keystone v3 endpoints, note
the following:

1) swift-dispersion currently does not support consuming Keystone v3
endpoints [1]. There is a patch merged to master [2] to fix that, but a
backport to kilo is yet to be done.
2) Each type (internal, admin, public) of endpoint created with the Keystone
v3 API has its own unique id, unlike with the v2 API where they're all
created with a single ID. This results in the keystone client being unable
to read the catalog created via the v3 API when querying via the v2 API. The
solution is to use the openstack client and to use the v3 API but this
obviously needs to be noted for upgrade impact and operators.
3) When glance is set up to use swift as a back-end, glance_store is unable
to authenticate to swift when the endpoint it uses is a v3 endpoint. There
is a review to master in progress [3] to fix this which is unlikely to make
it into kilo.

We (the openstack-ansible/os-ansible-deployment project) are tracking these
issues and doing tests to figure out all the bits. These are the bugs we've
hit so far. Also note that there is a WIP patch to gate purely on Keystone
v3 API's which is planned to become voting (hopefully) by the end of this
cycle.

[1] https://bugs.launchpad.net/swift/+bug/1468374
[2] https://review.openstack.org/195131
[3] https://review.openstack.org/193422




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-31 Thread Joshua Harlow

Mike Perez wrote:

On Mon, Jul 27, 2015 at 12:35 PM, Gorka Eguileor gegui...@redhat.com wrote:

I know we've all been looking at the HA Active-Active problem in Cinder
and trying our best to figure out possible solutions to the different
issues, and since current plan is going to take a while (because it
requires that we finish first fixing Cinder-Nova interactions), I've been
looking at alternatives that allow Active-Active configurations without
needing to wait for those changes to take effect.

And I think I have found a possible solution, but since the HA A-A
problem has a lot of moving parts I ended up upgrading my initial
Etherpad notes to a post [1].

Even if we decide that this is not the way to go, which we'll probably
do, I still think that the post brings a little clarity on all the
moving parts of the problem, even some that are not reflected on our
Etherpad [2], and it can help us not miss anything when deciding on a
different solution.


Based on IRC conversations in the Cinder room and hearing people's
opinions in the spec reviews, I'm not convinced the complexity that a
distributed lock manager adds to Cinder, for both developers and the
operators who will ultimately have to learn to maintain things like
ZooKeeper, is worth it.

**Key point**: We're not scaling Cinder itself, it's about scaling to
avoid build up of operations from the storage backend solutions
themselves.

Whatever people think ZooKeeper scaling level is going to accomplish
is not even a question. We don't need it, because Cinder isn't as
complex as people are making it.


I agree with 'cinder isn't as complex as people are making it' and that
is very likely a good thing to keep in mind; whether zookeeper can help
or not is a different question. Zookeeper imho is just another tool in
your toolset/belt, and as with any tool you have to know when to use it
(of course you can also just continue using chisels and such, too); I'd
rather people see that it is just that and avoid getting caught up on
the other aspects prematurely.
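For what it's worth, the developer-facing surface of a DLM can be quite
small if a library like tooz is used. A sketch, where the ZooKeeper URL,
member id and lock name are placeholders, and which is not a proposal for
Cinder's actual code:

    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'cinder-volume-host-1')
    coordinator.start()

    # Only one service in the cluster holds this lock at a time.
    with coordinator.get_lock(b'volume-0000-attach'):
        pass  # do the critical-section work here

    coordinator.stop()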


...random thought here, skip as needed... in all honesty, orchestration
solutions like mesos
(http://mesos.apache.org/assets/img/documentation/architecture3.jpg),
map-reduce solutions like hadoop, and stream processing systems like apache
storm (...) are already using zookeeper. I'm not saying we should just use it
because they are, but the likelihood that they picked it for no reason is
imho slim.




I'd like to think the Cinder team is great at recognizing potential
cross-project initiatives. Look at what Thang Pham has done with
Nova's version object solution. He made a generic solution into an
Oslo solution for all, and Cinder is using it. That was awesome, and
people really appreciated that there was a focus on other projects
getting better, not just Cinder.

Have people considered Ironic's hash ring solution? The project Akanda
is now adopting it [1], and I think it might have potential. I'd
appreciate it if interested parties could have this evaluated before
the Cinder midcycle sprint next week, to be ready for discussion.

[1] - https://review.openstack.org/#/c/195366/
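For anyone unfamiliar with the approach, the core idea is small. A toy
consistent-hash-ring sketch (this is not Ironic's implementation; host and
item names are made up):

    import bisect
    import hashlib

    class HashRing(object):
        def __init__(self, hosts, replicas=16):
            # Place several points per host on the ring to even out load.
            self.ring = {}
            self.keys = []
            for host in hosts:
                for i in range(replicas):
                    key = self._hash('%s-%d' % (host, i))
                    self.ring[key] = host
                    bisect.insort(self.keys, key)

        @staticmethod
        def _hash(data):
            return int(hashlib.md5(data.encode('utf-8')).hexdigest(), 16)

        def get_host(self, item):
            # Walk clockwise to the first host point at or after the item.
            idx = bisect.bisect(self.keys, self._hash(item)) % len(self.keys)
            return self.ring[self.keys[idx]]

    ring = HashRing(['volume-svc-1', 'volume-svc-2', 'volume-svc-3'])
    print(ring.get_host('volume-1234'))  # stable host assignment, no DLM

Each resource maps to exactly one service, so coordination happens by
partitioning work rather than by locking.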

--
Mike Perez



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova-database update

2015-07-31 Thread Matt Riedemann



On 7/31/2015 9:27 AM, Pradeep Kiruvale wrote:

Hi All,


I am new to OpenStack. I have a problem understanding how the Nova database
is updated with resource information.

My understanding, after going through the architecture documents and code, is
that the Nova database is the one which stores the resource information about
the compute servers. The Nova scheduler is responsible for placing a VM on a
correct compute node where the resources are available. Nova-compute is
responsible for creating the VMs/containers, etc.

The question here is: who actually updates the resource information in the
Nova database in real time, by reading the resource information from the
compute servers/nodes?

I have read that agents are responsible for this, but I didn't find which
code does that. Can anyone please help clarify this and point me to the code
that does it?


Thanks in advance

Cheers,
Pradeep








Mostly nova-conductor but sometimes also nova-api.  Pretty much anything 
that imports nova.db.api.


This might be helpful:

http://docs.openstack.org/developer/nova/architecture.html?highlight=architecture

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][fuel-library] Librarian changes

2015-07-31 Thread Vladimir Kuklin
Okay, folks, we had a short meeting to synchronize our vision of how it
should happen.

We will start by merging least-invasive modules like stdlib today and then
continue doing merges one by one in a discrete manner, reverting things
immediately if something goes wrong.

So there is a list of action items:

Alex Schultz will send a schedule of which modules will be merged in which
week and ensure that core reviewers know which commits they should merge
when, either by keeping W-1 on particular commits, by sharing the schedule
in the commit message so that no one can forget about it, or by some other
convenient method that he can invent.

I will remove my -2 for the initial librarian commit.


Thanks everyone for the collaboration and not calling me a selfish lunatic
:-)


On Fri, Jul 31, 2015 at 6:29 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Vladimir,
 can you please elaborate on such invasive changes?

 There was a plan developed, including risk mitigation, etc. - like do
 'grep -r' to check, and revert the change all together right away if we see
 regression. So far, everyone was aligned with the plan. It was discussed
 yesterday during IRC meeting [1]. Again, no one had objections.

 Please provide your concerns, explain your opinion in more details. I'd
 like other core reviewers to jump in here and reply. If you need details of
 the approach, please jump in a call with Alex Schultz.

 Thank you,

 [1]
 http://eavesdrop.openstack.org/meetings/fuel/2015/fuel.2015-07-30-16.00.html

 On Fri, Jul 31, 2015 at 5:52 AM Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Folks

 I do actively support our initiative to use librarian to be as close as
 possible to upstream, but let's not merge such invasive changes until we
 announce Hard Code Freeze and create stable/7.0 branch. So far I put my -2
 onto the first commit in the chain. Let's get through 7.0 and then land
 this code in the master as early as possible after HCF.


 On Fri, Jul 31, 2015 at 3:21 PM, Aleksandra Fedorova 
 afedor...@mirantis.com wrote:

  So far CI has been successful on all of these
 changes, and bvt is currently running.

 Small update - BVT test passed.

 On Fri, Jul 31, 2015 at 6:27 AM, Alex Schultz aschu...@mirantis.com
 wrote:
  Hey everyone,
 
  During the fuel meeting today we discussed the librarian changes and
  their status.
  As part of this work, the wiki page was updated and a first attempt at
  migrating the
  following modules has been completed pending merge:
 
  stdlib
  concat
  inifile
  ssh
  ntp
  apache
  firewall
  xinetd
  cinder
  apt*
 
  It should be noted that apt is currently blocked by the lack of a
 mirror so
  while it has
  been prepared, it should not be merged at this time.
 
  As part of this migration we are doing two things. The first is an
 update to
  the build
  process that is included as part of the initial librarian[0] patch.
 The
  other patches
  consist of the actual module code changes.
 
  Here is the list of the diffs for each change so that it can be
 reviewed and
  people can
  raise concerns if there are any with this change. As part of the
 migration,
  I inspected
  the code and file differences for each module to determine how much
 impact
  they might
  have.  I chose the list of modules based on their minimal differences
 from
  the upstream
  or if they already had our forked differences rolled into a newer
 version of
  the module.
  For this list, I took the current stable iso (#110) and rebased the
 changes
  on top of this
  to create a custom iso with just the librarian changes. We have kicked
 off a
  bvt_2 test for
  the custom iso as well. From this iso I have extracted the fuel-library
  package from both
  of these isos and exploded the fuel-library folder structure to do the
  diffs.
 
  Code Changes:
 
  For stdlib, the only differences are related to git, travis or
 fixtures[1].
  There are no
  puppet code changes as part of the librarian migration.
 
  For concat, the only differences were a git folder and in a custom
 change to
  the spec tests[2].
  The test difference[3], was a change we made because it was failing our
  syntax checker.
  This change has been included in a newer version of concat (1.2.4) but is
  not necessary when the module gets moved to be included via librarian.
 
  For inifile, the only difference is the addition of git and metadata
  files[4].
 
  For ssh, the only difference is a single line to have the config notify
  service[5]. This
  difference is already covered by another file and is not needed[6].
 
  For ntp, this change introduces more code changes[7] because we are
  updating the module to the 4.0.0 version, since functionality we had
  previously extended is now covered by 4.0.0 vs 3.3.0[8]. The changes in
  our fork were upstreamed and are included in 4.0.0.
 
  For apache, this change includes an upgrade from 1.2.0 to
 1.3.0[9][10]. Our
  fork had a
  customization made which was contributed upstream.
 

Re: [openstack-dev] [fuel][fuel-library] Librarian changes

2015-07-31 Thread Mike Scherbakov
Vladimir,
can you please elaborate on such invasive changes?

There was a plan developed, including risk mitigation, etc. - like do 'grep
-r' to check, and revert the change all together right away if we see
regression. So far, everyone was aligned with the plan. It was discussed
yesterday during IRC meeting [1]. Again, no one had objections.

Please provide your concerns, explain your opinion in more details. I'd
like other core reviewers to jump in here and reply. If you need details of
the approach, please jump in a call with Alex Schultz.

Thank you,

[1]
http://eavesdrop.openstack.org/meetings/fuel/2015/fuel.2015-07-30-16.00.html

On Fri, Jul 31, 2015 at 5:52 AM Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Folks

 I do actively support our initiative to use librarian to be as close as
 possible to upstream, but let's not merge such invasive changes until we
 announce Hard Code Freeze and create stable/7.0 branch. So far I put my -2
 onto the first commit in the chain. Let's get through 7.0 and then land
 this code in the master as early as possible after HCF.


 On Fri, Jul 31, 2015 at 3:21 PM, Aleksandra Fedorova 
 afedor...@mirantis.com wrote:

  So far CI has been successful on all of these
 changes, and bvt is currently running.

 Small update - BVT test passed.

 On Fri, Jul 31, 2015 at 6:27 AM, Alex Schultz aschu...@mirantis.com
 wrote:
  Hey everyone,
 
  During the fuel meeting today we discussed the librarian changes and
  their status.
  As part of this work, the wiki page was updated and a first attempt at
  migrating the
  following modules has been completed pending merge:
 
  stdlib
  concat
  inifile
  ssh
  ntp
  apache
  firewall
  xinetd
  cinder
  apt*
 
  It should be noted that apt is currently blocked by the lack of a
 mirror so
  while it has
  been prepared, it should not be merged at this time.
 
  As part of this migration we are doing two things. The first is an
 update to
  the build
  process that is included as part of the initial librarian[0] patch.  The
  other patches
  consist of the actual module code changes.
 
  Here is the list of the diffs for each change so that it can be
 reviewed and
  people can
  raise concerns if there are any with this change. As part of the
 migration,
  I inspected
  the code and file differences for each module to determine how much
 impact
  they might
  have.  I chose the list of modules based on their minimal differences
 from
  the upstream
  or if they already had our forked differences rolled into a newer
 version of
  the module.
  For this list, I took the current stable iso (#110) and rebased the
 changes
  on top of this
  to create a custom iso with just the librarian changes. We have kicked
 off a
  bvt_2 test for
  the custom iso as well. From this iso I have extracted the fuel-library
  package from both
  of these isos and exploded the fuel-library folder structure to do the
  diffs.
 
  Code Changes:
 
  For stdlib, the only differences are related to git, travis or
 fixtures[1].
  There are no
  puppet code changes as part of the librarian migration.
 
  For concat, the only differences were a git folder and in a custom
 change to
  the spec tests[2].
  The test difference[3], was a change we made because it was failing our
  syntax checker.
  This change has been included in a newer version of concat (1.2.4) but is
  not necessary when the module gets moved to be included via librarian.
 
  For inifile, the only difference is the addition of git and metadata
  files[4].
 
  For ssh, the only difference is a single line to have the config notify
  service[5]. This
  difference is already covered by another file and is not needed[6].
 
  For ntp, this change introduces more code changes[7] because we are
  updating the module to the 4.0.0 version, since functionality we had
  previously extended is now covered by 4.0.0 vs 3.3.0[8]. The changes in
  our fork were upstreamed and are included in 4.0.0.
 
  For apache, this change includes an upgrade from 1.2.0 to 1.3.0[9][10].
 Our
  fork had a
  customization made which was contributed upstream.
  (apache::mod::proxy_connect)
 
  For firewall, this change also includes an upgrade from 1.0.2 to
  1.2.0[11], as our fork had MAC support added[12], which is now covered
  upstream.
 
  For xinetd, the only change was the addition of a .git folder and a
  .gitignore with librarian.
 
  For cinder, the only change was the addition of .git, .gitignore, and
  .gitreview.
 
  Once we can get the apt mirror created, the only change for that is
 also the
  addition of
  .git.
 
 
  If there are any of these upgrades/changes that we do not want to tackle
  right now, I can
  adjust the review order such that it can be skipped for now.  Please
 take
  some time to
  review these changes and raise concerns.  So far CI has been successful
 on
  all of these
  changes, and bvt is currently running.
 
  Also please take some time to review the changes themselves:
 
 

[openstack-dev] [Congress] meeting time change

2015-07-31 Thread Tim Hinrichs
Hi all,

We managed to find a day/time where all the active contributors can attend
(without being up too early/late).  The room, day, and time have all
changed.

Room: #openstack-meeting-2
Time: Wednesday 5p Pacific = Thursday midnight UTC

Next week we begin with this new schedule.

And don't forget that next week Thu/Fri is our Mid-cycle sprint.  Hope to
see you there!

Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova, cinder, neutron] quota-update tenant-name bug

2015-07-31 Thread Fox, Kevin M
Ah, fixing it in python-neutronclient would be just fine too. It's just the
unfortunate usability issue that you're on the CLI, you try to update the
tenant's quota, it says ok, done, and it didn't actually do the thing you
thought it did. If you're using the API or REST, the documentation should be
ok to handle the tenant_id-only thing.

Thanks,
Kevin


From: Salvatore Orlando [salv.orla...@gmail.com]
Sent: Friday, July 31, 2015 12:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova, cinder, neutron] quota-update tenant-name 
bug

More comments inline.

Salvatore

On 31 July 2015 at 01:47, Kevin Benton blak...@gmail.com wrote:
The issue is that the Neutron credentials might not have privileges to resolve 
the name to a UUID. I suppose we could just fail in that case.


As quota-update is usually restricted to admin users this should not be a 
problem, unless the deployment uses per-service admin users.


Let's see what happens with the nova spec Salvatore linked.

That spec seems stuck to me. I think the reason is a lack of motivation for
raising its priority.


On Thu, Jul 30, 2015 at 4:33 PM, Fox, Kevin M 
kevin@pnnl.gov wrote:
If the quota update resolved the name to a UUID before it updated the quota by
UUID, I think it would resolve the issues? You'd just have to check if keystone
was in use, and then do the extra resolve on update. I think the rest of the
stuff can just remain using UUIDs?

Once you accept that it's not a big deal to do a round trip to keystone, then
we can do whatever we want. If there is value from an API usability perspective
we'll just do that.
If the issue is instead more the CLI UX, I would consider resolving the
name (and possibly validating the tenant UUID) in python-neutronclient.

Also, I've checked the docs [1] and [2] and neutron quota-update is not 
supposed to accept tenant name - so probably the claim made in the initial post 
on this thread did not apply to neutron after all.


Thanks,
Kevin

From: Kevin Benton [blak...@gmail.com]
Sent: Thursday, July 30, 2015 4:22 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova, cinder, neutron] quota-update tenant-name 
bug

Good point. Unfortunately the other issues are going to be the hard part to 
deal with. I probably shouldn't have brought up performance as a complaint at 
this stage. :)

On Thu, Jul 30, 2015 at 3:26 AM, Fox, Kevin M 
kevin@pnnl.gov wrote:
Can a non-admin update quotas? Quota updates are rare. Performance of them can
take the hit.

Thanks,
Kevin


From: Kevin Benton
Sent: Wednesday, July 29, 2015 10:44:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova, cinder, neutron] quota-update tenant-name 
bug

Dev lessons learned: we need to validate our inputs better and refuse to
update a tenant-id that does not exist.

This is something that has come up in Neutron discussions before. There are two 
issues here:
1. Performance: it will require a round-trip to Keystone on every request.
2. If the Neutron keystone user is unprivileged and the request context is
unprivileged, we might not actually be allowed to tell if the tenant exists.

The first we can deal with, but the second is going to be an issue that we 
might not be able to get around.

How about as a temporary solution, we just confirm that the input is a UUID so 
names don't get used?
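oslo.utils already ships a helper for that check; for example (the fallback
function below is just an illustration, and is stricter than the oslo helper
in that it requires the canonical hyphenated form):

    import uuid

    from oslo_utils import uuidutils

    def looks_like_uuid(value):
        # Plain-stdlib equivalent, for illustration only.
        try:
            return str(uuid.UUID(value)) == value.lower()
        except (TypeError, ValueError):
            return False

    assert uuidutils.is_uuid_like('5a8120d8-2f34-4a43-a336-3a3b1bbb2f5c')
    assert not uuidutils.is_uuid_like('my-tenant-name')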

On Wed, Jul 29, 2015 at 10:19 PM, Bruno L 
teolupus@gmail.com wrote:
This is probably affecting other people as well, so hopefully this message
will avoid some headaches.

[nova,cinder,neutron] will allow you to do a quota-update using the tenant-name 
(instead of tenant-id). They will also allow you to do a quota-show tenant-name 
and get the expected values back.

Then you go to the tenant and end up surprised that the quotas have not been 
applied and you can still do things you were not supposed to.

It turns out that [nova,cinder,neutron] just created an entry in the quota
table, inserting the tenant-name in the tenant-id field.

Surprise, surprise!

Ops lessons learned: use the tenant-id!

Dev lessons learned: we need to validate our inputs better and refuse to update
a tenant-id that does not exist.

I have documented this behaviour on 
https://bugs.launchpad.net/neutron/+bug/1399065 and 
https://bugs.launchpad.net/neutron/+bug/1317515. I can reproduce it in IceHouse.

Could someone please confirm if this is still the case on master? If not, which 
version of OpenStack addressed that?

Thanks,
Bruno


Re: [openstack-dev] Proposing Kanagaraj Manickam and Ethan Lynn for heat-core

2015-07-31 Thread Zane Bitter

You forgot the [heat] tag ;)

On 31/07/15 00:35, Steve Baker wrote:

I believe the heat project would benefit from Kanagaraj Manickam and
Ethan Lynn having the ability to approve heat changes.


+1 for both and, at the risk of counting votes prematurely, welcome!

- ZB


Their reviews are valuable[1][2] and numerous[3], and both have been
submitting useful commits in a variety of areas in the heat tree.

Heat cores, please express your approval with a +1 / -1.

[1] http://stackalytics.com/?user_id=kanagaraj-manickammetric=marks
[2] http://stackalytics.com/?user_id=ethanlynnmetric=marks
[3] http://stackalytics.com/report/contribution/heat-group/90

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] [os-ansible-deployment] Kilo - Liberty Upgrade Problems

2015-07-31 Thread Ian Cordasco


On 7/31/15, 07:40, Jesse Pretorius jesse.pretor...@gmail.com wrote:

I'm adding openstack-operators too, as this is a discussion that I think
it would be useful to have their input on.

I've removed them from the CC since I don't think we're supposed to
cross-post.

So what would the resulting version be? Would the python wheel be
2016.x.x or would the file simply be named that so that we're sitting
with this workaround for only one cycle and future cycles can revert to
the previous process?
 

So, I'm not proposing that yaprt be updated to forcibly add epochs or
anything else. What yaprt will generate is exactly what upstream
specifies. For example, glance would be version 11.0.0, nova would be
12.0.0, etc.

Giving a more concrete example (which I should have done in the original
message):

If we're upgrading Glance in the containers, we'll be upgrading from

glance==2015.1.0 (obviously not exactly that, but for the sake of this
example bear with me)

To 

glance==11.0.0

And we'll use the version that yaprt reports to do

# pip install glance==11.0.0

So the repositories that we build can have both 2015.1.0 and 11.0.0 in it
and pip will always install 11.0.0.

This allows for

1. In-place upgrades
2. Reproducibility
3. Stability
4. Side-stepping upstream refusing to use epochs

This is not a fun problem to have to solve, but it seems a reasonable
solution. Whatever we do I'd prefer to see it as a solution that we only
have to carry for one cycle so that all versioning matches upstream from
then on. If that's not possible then
 some sort of epoch-style workaround like this may just be something we
have to live with.

Yeah, this could be held onto for a single cycle, certainly, but it is
really just good practice given the volatility we should be
expecting from upstream versioning at this point. If we're going to go
back and forth between versioning schemes without epochs because they're
ugly, we may as well simply always pin the version we're installing in
containers. It has the same effect as we previously had by building local
repositories and always installing the latest (which was always the
version we most recently built). This means that we need to read the yaprt
report (which is JSON and will be trivial) and incorporate its information
into our service-specific roles. It should be as low impact as I can
imagine an upgrade-specific change like this being.
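
To make "trivial" concrete, the consumption side is roughly the sketch
below (the report path and JSON layout here are assumptions for
illustration, not yaprt's documented schema):

    import json

    # Hypothetical yaprt report: {"packages": {"glance": "11.0.0", ...}}
    with open('/var/www/repo/reports/release.json') as handle:
        report = json.load(handle)

    # Turn the report into the explicit pins the service roles will use,
    # e.g. "pip install glance==11.0.0".
    pins = ['%s==%s' % (name, version)
            for name, version in sorted(report['packages'].items())]
    print('\n'.join(pins))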

Does that make sense?

Cheers,
Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ironic] Scheduler filtering based on instances

2015-07-31 Thread Ed Leafe

The scheduler in nova has a few filters (TypeAffinityFilter,
SameHostFilter, DifferentHostFilter) that rely on information about
the instances that already exist on a given host. Before Kilo, these
filters made direct DB calls in order to make their decisions. This
was changed in Kilo to have the HostManager pre-populate the host with
its instances, either from an in-memory view in the scheduler, or, if
that update option was turned off, by a single DB query per host per
request.

A few days ago, Jim Rollenhagen noticed that this instance query was
causing severe problems for an ironic installation. It was felt that
such a query made no sense for ironic, as there would typically be a
single 'host' for all ironic nodes, and that the query was returning
hundreds of instances, none of which were needed.

Our initial solution [1] was to simply disable instance queries in the
IronicHostManager, as they were never needed, and were painful to
execute. But subsequent discussion on IRC brought up some potential
(although not very likely) use cases where this might not be true. One
example was if there were two computes each talking to one of two
ironic installations, one might use the SameHostFilter to ensure that
a new ironic instance was placed in the same ironic environment as
another instance. While this seemed possible, the idea of querying
hundreds or thousands of instances to see if the target was in that
group seemed like overkill. I suggested that if this were ever
something that someone needed, creating a new filter would be a better
way forward than trying to retrofit the virt design onto ironic.

So we're looking for input on the wisdom of either approach, or ideas
for a different approach. If you have experience with real-world
ironic deployments, please share your insights.

[1] https://review.openstack.org/#/c/206736/
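
For context, the gist of [1] is a one-method override, roughly the
following (the method name and signature follow the Kilo-era
HostManager, so treat the details as assumptions rather than the exact
patch):

    from nova.scheduler import host_manager

    class IronicHostManager(host_manager.HostManager):
        def _get_instance_info(self, context, compute):
            # Ironic nodes host at most one instance each and the
            # affinity filters don't apply, so skip the expensive
            # per-host instance query entirely.
            return {}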

--
Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][fuel-library] Librarian changes

2015-07-31 Thread Vladimir Kuklin
Folks

I do actively support our initiative to use librarian to be as close as
possible to upstream, but let's not merge such invasive changes until we
announce Hard Code Freeze and create stable/7.0 branch. So far I put my -2
onto the first commit in the chain. Let's get through 7.0 and then land
this code in the master as early as possible after HCF.


On Fri, Jul 31, 2015 at 3:21 PM, Aleksandra Fedorova afedor...@mirantis.com
 wrote:

  So far CI has been successful on all of these
 changes, and bvt is currently running.

 Small update - BVT test passed.

 On Fri, Jul 31, 2015 at 6:27 AM, Alex Schultz aschu...@mirantis.com
 wrote:
  Hey everyone,
 
  During the fuel meeting today we discussed the librarian changes and
  their status.
  As part of this work, the wiki page was updated and a first attempt at
  migrating the
  following modules has been completed pending merge:
 
  stdlib
  concat
  inifile
  ssh
  ntp
  apache
  firewall
  xinetd
  cinder
  apt*
 
  It should be noted that apt is currently blocked by the lack of a mirror
 so
  while it has
  been prepared, it should not be merged at this time.
 
  As part of this migration we are doing two things. The first is an
 update to
  the build
  process that is included as part of the initial librarian[0] patch.  The
  other patches
  consist of the actual module code changes.
 
  Here is the list of the diffs for each change so that it can be reviewed
 and
  people can
  raise concerns if there are any with this change. As part of the
 migration,
  I inspected
  the code and file differences for each module to determine how much
 impact
  they might
  have.  I chose the list of modules based on their minimal differences
 from
  the upstream
  or if they already had our forked differences rolled into a newer
 version of
  the module.
  For this list, I took the current stable iso (#110) and rebased the
 changes
  on top of this
  to create a custom iso with just the librarian changes. We have kicked
 off a
  bvt_2 test for
  the custom iso as well. From this iso I have extracted the fuel-library
  package from both
  of these isos and exploded the fuel-library folder structure to do the
  diffs.
 
  Code Changes:
 
  For stdlib, the only differences are related to git, travis or
 fixtures[1].
  There are no
  puppet code changes as part of the librarian migration.
 
  For concat, the only differences were a git folder and in a custom
 change to
  the spec tests[2].
  The test difference[3], was a change we made because it was failing our
  syntax checker.
  This change has been included in a newer version of concat (1.2.4) but
 are
  not necessary
  when the module gets moved to be included via librarian.
 
  For inifile, the only difference is the addition of git and metadata
  files[4].
 
  For ssh, the only difference is a single line to have the config notify
  service[5]. This
  difference is already covered by another file and is not needed[6].
 
  For ntp, this change introduces more code changes[7] because we are
 updating
  the module
  to the 4.0.0 version because of previous extending of functionality that
 is
  now covered by
  4.0.0 vs 3.3.0[8]. The changes in our fork were upstreamed and are
 include
  in 4.0.0.
 
  For apache, this change includes an upgrade from 1.2.0 to 1.3.0[9][10].
 Our
  fork had a
  customization made which was contributed upstream.
  (apache::mod::proxy_connect)
 
  For firewall, this change also includes an upgrade from 1.0.2 to
 1.2.0[11]
  as our fork had
  mac supported added[12] in which is now covered upstream.
 
  For xinetd, the only change was the addition of a .git folder and a
  .gitignore with librarian.
 
  For cinder, the only change was the addition of .git, .gitignore, and
  .gitreview.
 
  Once we can get the apt mirror created, the only change for that is also
 the
  addition of
  .git.
 
 
  If there are any of these upgrades/changes that we do not want to tackle
  right now, I can
  adjust the review order such that it can be skipped for now.  Please take
  some time to
  review these changes and raise concerns.  So far CI has been successful
 on
  all of these
  changes, and bvt is currently running.
 
  Also please take some time to review the changes themselves:
 
 https://review.openstack.org/#/q/status:open+project:stackforge/fuel-library+branch:master+topic:bp/fuel-puppet-librarian,n,z
 
  Please raise any concerns as quickly as possible as this is the last call
  for objections
  for these reviews.  This has been talked about extensively and these
 reviews
  have
  been available for several weeks now.
 
  Thanks,
  -Alex
 
 
  [0] https://review.openstack.org/#/c/202763/
  [1] http://paste.openstack.org/show/406523/
  [2] http://paste.openstack.org/show/406524/
  [3] http://paste.openstack.org/show/406525/
  [4] http://paste.openstack.org/show/406526/
  [5] http://paste.openstack.org/show/406527/
  [6]
 
 

[openstack-dev] [Fuel] Release of python-fuelclient today

2015-07-31 Thread Roman Prykhodchenko
Folks,

today I’m going to make a new public release of Fuel Client.
If you badly need to merge something before that or have any objections, please 
let me know before 17:00 CEST (UTC+2).


- romcheg



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][fuel-library] Librarian changes

2015-07-31 Thread Alex Schultz
Here is the proposed schedule, with the ordering adjusted based on the risk
of the code changes. Please let me know if you would like any of these dates
changed.  You will see that I have bumped the modules that involve version
upgrades to later dates so that we can stop before those changes if we feel
they are too risky.  We should be safe up until the ssh module, since there
are zero puppet code changes before it.

Today 7/31
librarian - https://review.openstack.org/#/c/202763/
stdlib - https://review.openstack.org/#/c/203386/

Monday 8/3
concat - https://review.openstack.org/#/c/203387/
inifile - https://review.openstack.org/#/c/203395/

Tuesday 8/4
xinetd - https://review.openstack.org/#/c/203393/
ssh - https://review.openstack.org/#/c/203392/

Wednesday 8/5
ntp - https://review.openstack.org/#/c/203390/

Thursday 8/6
apache - https://review.openstack.org/#/c/203388/

Monday 8/10
firewall - https://review.openstack.org/#/c/203396/

This commit should be signed off by the teams who will ultimately be
responsible for it to ensure they are familiar with the workflows
involved[0]. It has no code changes, but may impact bug fixing so we may
want to wait until 8.0 for this. If not, it should be done last.
cinder - https://review.openstack.org/#/c/203394/

This cannot be merged until we get a fuel-infra mirror, but is a change
with no code changes so can be done whenever the mirror gets resolved.
apt - https://review.openstack.org/#/c/203389/

If at any point there are issues and we decide to revert, the commits
just need to be reverted in reverse order and we can use the web UI to do
so.  I will finish re-ordering the patches today, but we should be OK
through the ntp module at this time.

Thanks,
-Alex

[0] https://wiki.openstack.org/wiki/Fuel/Library_and_Upstream_Modules

On Fri, Jul 31, 2015 at 11:39 AM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Okay, folks, we had a short meeting to synchronize our vision of how it
 should happen.

 We will start by merging least-invasive modules like stdlib today and then
 continue doing merges one by one in discrete manner and revert things
 immediately if something goes wrong.

 So there is a list of action items:

 Alex Schultz will send a schedule of which modules will be merged in which
 week and ensure that core reviewers know which commits they should merge
 when, either by keeping W-1 on particular commits, by sharing the schedule
 in the commit message so that no one can forget about it, or by some other
 convenient method that he can invent.

 I will remove my -2 for the inital librarian commit.


 Thanks everyone for the collaboration and not calling me a selfish lunatic
 :-)


 On Fri, Jul 31, 2015 at 6:29 PM, Mike Scherbakov mscherba...@mirantis.com
  wrote:

 Vladimir,
 can you please elaborate on such invasive changes?

 There was a plan developed, including risk mitigation, etc. - like do
 'grep -r' to check, and revert the change all together right away if we see
 regression. So far, everyone was aligned with the plan. It was discussed
 yesterday during IRC meeting [1]. Again, no one had objections.

 Please provide your concerns and explain your opinion in more detail. I'd
 like other core reviewers to jump in here and reply. If you need details of
 the approach, please jump in a call with Alex Schultz.

 Thank you,

 [1]
 http://eavesdrop.openstack.org/meetings/fuel/2015/fuel.2015-07-30-16.00.html

 On Fri, Jul 31, 2015 at 5:52 AM Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Folks

 I do actively support our initiative to use librarian to be as close as
 possible to upstream, but let's not merge such invasive changes until we
 announce Hard Code Freeze and create stable/7.0 branch. So far I put my -2
 onto the first commit in the chain. Let's get through 7.0 and then land
 this code in the master as early as possible after HCF.


 On Fri, Jul 31, 2015 at 3:21 PM, Aleksandra Fedorova 
 afedor...@mirantis.com wrote:

  So far CI has been successful on all of these
 changes, and bvt is currently running.

 Small update - BVT test passed.

 On Fri, Jul 31, 2015 at 6:27 AM, Alex Schultz aschu...@mirantis.com
 wrote:
  Hey everyone,
 
  During the fuel meeting today we discussed the librarian changes
 and
  their status.
  As part of this work, the wiki page was updated and a first attempt at
  migrating the
  following modules has been completed pending merge:
 
  stdlib
  concat
  inifile
  ssh
  ntp
  apache
  firewall
  xinetd
  cinder
  apt*
 
  It should be noted that apt is currently blocked by the lack of a
 mirror so
  while it has
  been prepared, it should not be merged at this time.
 
  As part of this migration we are doing two things. The first is an
 update to
  the build
  process that is included as part of the initial librarian[0] patch.
 The
  other patches
  consist of the actual module code changes.
 
  Here is the list of the diffs for each change so that it can be
 reviewed and
  people can
  raise concerns if there are any with this change. As part of the
 migration,
  I 

[openstack-dev] [app-catalog] App Catalog IRC meeting minutes - 7/30/2015

2015-07-31 Thread Christopher Aedo
This week we had a great meeting, and have had a lot of good
conversations flowing on the IRC channel.  We're solidifying the next
steps on our roadmap, and Kevin Fox has made great progress on
creating a Horizon panel to allow users to browse the catalog from
Horizon as well as provide a one-click fetch of assets.

One other major change we are discussing is incorporating the voting
and feedback used in ask.openstack.org in order to provide users of
the App Catalog a way to upvote their favorite assets (and downvote
problematic ones), and add comments around any specific asset.

As always, please join us on IRC (#openstack-app-catalog) or speak up
on the mailing list if there are things you would like to see, or
additions that would help improve the App Catalog!

=
#openstack-meeting-3: app-catalog
=

Meeting started by docaedo at 17:00:33 UTC.  The full logs are available
at
http://eavesdrop.openstack.org/meetings/app_catalog/2015/app_catalog.2015-07-30-17.00.log.html

Meeting summary
---
* rollcall  (docaedo, 17:00:52)
* Status updates (docaedo)  (docaedo, 17:03:33)
  * LINK: https://review.openstack.org/194875  (docaedo, 17:03:51)
  * LINK: https://review.openstack.org/#/c/207253/  (docaedo, 17:04:48)
  * LINK: https://youtu.be/9TlPhmml-T8 :)  (kfox, 17:07:26)
* All about the roadmap  (docaedo, 17:15:01)
  * LINK:
http://lists.openstack.org/pipermail/openstack-dev/2015-July/070423.html
(docaedo, 17:16:53)
  * LINK: https://github.com/kfox/apps-catalog-ui/  (kfox,
17:21:20)
  * LINK: https://en.wikipedia.org/wiki/Disqus yep, these guys
(kzaitsev_mb, 17:30:37)
  * ACTION: discuss/consider using ask.openstack.org code for
voting/comments  (docaedo, 17:39:29)
* App Catalog Horizon Plugin Update (kfox)  (docaedo, 17:44:00)
  * LINK: https://youtu.be/9TlPhmml-T8  (kfox, 17:45:56)
  * LINK: https://github.com/kfox/apps-catalog-ui/  (kfox,
17:46:10)
  * LINK: https://review.openstack.org/#/c/206773/  (kzaitsev_mb,
17:46:12)
* Other Horizon Plugins (kfox)  (docaedo, 17:55:33)

Meeting ended at 18:01:16 UTC.

Action items, by person
---
* openstack
  * discuss/consider using ask.openstack.org code for voting/comments

People present (lines said)
---
* kfox (91)
* docaedo (72)
* j^2 (26)
* rhagarty_ (26)
* kzaitsev_mb (25)
* kzaitsev_ip (3)
* openstack (3)
Generated by `MeetBot`_ 0.1.4

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] meeting time change

2015-07-31 Thread Tim Hinrichs
Peter pointed out that no one uses #openstack-meeting-2.  So we'll go with
#openstack-meeting.  Here are the updated meeting details.

Room: #openstack-meeting
Time: Wednesday 5p Pacific = Thursday midnight UTC

There's a change out for review that will update the meeting website once
it merges.
http://eavesdrop.openstack.org/#Congress_Team_Meeting
https://review.openstack.org/#/c/207981/

Tim

On Fri, Jul 31, 2015 at 9:24 AM Tim Hinrichs t...@styra.com wrote:

 Hi all,

 We managed to find a day/time where all the active contributors can attend
 (without being up too early/late).  The room, day, and time have all
 changed.

 Room: #openstack-meeting-2
 Time: Wednesday 5p Pacific = Thursday midnight UTC

 Next week we begin with this new schedule.

 And don't forget that next week Thu/Fri is our Mid-cycle sprint.  Hope to
 see you there!

 Tim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-31 Thread Duncan Thomas
On 31 July 2015 at 20:40, Mike Perez thin...@gmail.com wrote:


 Regardless, I want to know if we really need a DLM. Does Ceilometer
 really need a DLM? Does Cinder really need a DLM? Can we just use a
 hash ring solution where operators don't even have to know or care
 about deploying a DLM and running multiple instances of Cinder manager
 just works?



There's a lot of circling around here about what we're trying to achieve
with 'H/A'.

Some people are interested in performance. For them, a hash ring solution
(deterministic load balancing) is fine. If the aim is availability (as mine
is) then I can't see how it helps. I might be missing something, of course
- if so, I'm happy to be corrected.

To be clear, my aim with H/A is to remove the situation where a single node
failure removes the control path for my storage. Currently, the only way to
avoid this is to use something like pacemaker to monitor the c-vol
services. Extensive experience suggests that pacemaker is a complex,
fragile piece of software. Every component of cinder except c-vol can be
deployer active/active[/active/...] - I'm aiming for consistency of
approach if nothing else.

If it ends up that trying to fix this adds too much complexity and/or
fragility to cinder itself, then I can accept that - if whatever we do
ends up being worse than pacemaker, we've taken a significant step
backwards.

Regardless of how H/A discussions go, the first part of Gorka's patch can
certainly be used to fix a few of the API races we have, and can do so with
rather nice, elegant, easy to understand code, so I think the whole process
has been productive whatever the H/A outcome.


-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-31 Thread Joshua Harlow

Mike Perez wrote:

On Fri, Jul 31, 2015 at 8:56 AM, Joshua Harlowharlo...@outlook.com  wrote:

...random thought here, skip as needed... in all honesty orchestration
solutions like mesos
(http://mesos.apache.org/assets/img/documentation/architecture3.jpg),
map-reduce solutions like hadoop, stream processing systems like apache
storm (...), are already using zookeeper and I'm not saying we should just
use it because they are, but the likelihood that they just picked it for no
reason is imho slim.


I'd really like to see focus cross project. I don't want Ceilometer to
depend on ZooKeeper, Cinder to depend on etcd, etc. This is not ideal
for an operator to have to deploy, learn and maintain each of these
solutions.

I think this is difficult when you consider everyone wants options of
their preferred DLM. If we went this route, we should pick one.


+1



Regardless, I want to know if we really need a DLM. Does Ceilometer
really need a DLM? Does Cinder really need a DLM? Can we just use a
hash ring solution where operators don't even have to know or care
about deploying a DLM and running multiple instances of Cinder manager
just works?


All very good questions, although IMHO a hash-ring is just a piece of 
the puzzle, and is more equivalent to sharding resources, which yes is 
one way to scale as long as each shard never touches anything from the 
other shards. If those shards ever start to need to touch anything 
shared then you get back into this same situation again for a DLM (and at
that point you really do need the 'distributed' part of DLM, because each
shard is distributed).


And a few (maybe obvious) questions:

- How would re-sharding work?
- If sharding (the hash-ring partitioning) is based on entities 
(conductors/other) owning a 'bucket' of resources (i.e. entity 1 manages
resources A-F, entity 2 manages resources G-M...), what happens if an
entity dies, does some other entity take over that bucket, what happens
if that entity really hasn't 'died' but is just disconnected from the
network (partition tolerance...)? (If the answer is there is a lock on
the resource/s being used by each entity, then you get back into the LM
question).


I'm unsure about how ironic handles these problems (although I believe 
they have a hash-ring and still have a locking scheme as well, so maybe 
that's their answer for the dual-entities manipulating the same bucket
problem).
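
To make the sharding half concrete, a toy consistent hash ring looks
something like the sketch below (nothing like ironic's production
implementation, just to fix ideas):

    import bisect
    import hashlib

    class HashRing(object):
        def __init__(self, hosts, replicas=32):
            # Several points per host on the ring smooths out the split.
            self.ring = sorted(
                (self._hash('%s-%d' % (host, i)), host)
                for host in hosts for i in range(replicas))
            self.keys = [key for key, _host in self.ring]

        @staticmethod
        def _hash(value):
            return int(hashlib.md5(value.encode('utf-8')).hexdigest(), 16)

        def get_host(self, resource_id):
            # The first point clockwise from the resource's hash owns it.
            index = bisect.bisect(self.keys, self._hash(resource_id))
            return self.ring[index % len(self.ring)][1]

    ring = HashRing(['entity-1', 'entity-2', 'entity-3'])
    print(ring.get_host('volume-abc'))  # deterministic owner

Note it answers none of the failure/partition questions above -- that's
exactly the part a lock manager still has to cover.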




--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Let's talk about API versions

2015-07-31 Thread Dmitry Tantsur
2015-07-30 22:58 GMT+02:00 Devananda van der Veen devananda@gmail.com:


 On Thu, Jul 30, 2015 at 10:21 AM Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Jim Rollenhagen's message of 2015-07-27 13:35:25 -0700:
  Hi friends.
 
  Ironic implemented API micro versions in Kilo. We originally did this
  to allow for breaking changes in the API while allowing users to opt in
  to the breakage.
 
  Since then, we've had a default version for our client that we bump to
  something sensible with each release. Currently it is at 1.8.
  Negotiation is done with the server to figure out what is supported and
  adjust accordingly.
 
  Now we've landed a patch[0] with a new version (1.11) that is not
  backward compatible. It causes newly added Node objects to begin life in
  the ENROLL state, rather than AVAILABLE. This is a good thing, and
  people should want this! However, it is a breaking change. Automation
  that adds nodes to Ironic will need to do different things after the
  node-create call.
 

 Great discussion. I think through this thread I've gained some insight
 into what is going on, and I think the problem is that minor version
 bumps are not for backward incompatible changes. As a user, I want to
 pin everything and move the pins after I've tested and adapted with new
 versions of things. However, I also don't want to have to micro manage
 this process while I move forward on different schedules with my REST
 clients and the Ironic service.

 So, to be clear, I may have missed what you're trying to do with micro
 versions.

 In my world, a 1.10 - 1.11 would be for adding new methods, new
 arguments, or deprecating (but not removing) an existing piece of the
 API. But changing the semantics of an existing thing is a major version
 bump. So I wonder if the right thing to do is to bump to 2.0, make the
 default in the client 1.10* for now, with deprecation warnings that the
 major version will not be assumed for future client libraries, and move
 on with the understanding that micro versions aren't supposed to break
 users of a particular major version.

 Thoughts?

 * I'm assuming it is possible to make micro version changes to the 1.x API
   as 1.10.1, 1.10.2,etc


 Despite most folks calling this microversions, I have been trying to
 simply call this API version negotiation.

 To your question, no -- the implementations by Nova and Ironic, and the
 proposal that the API-WG has drafted [1], do not actually support
 MAJOR.MINOR.PATCH semantics.

 It has been implemented as a combination of an HTTP request to
 http(s)://<server URL>/<major>/<resource URI> plus a header
 X-OpenStack-<service>-API-Version: <major>.<minor>.


What if we break this assumption? I.e., we treat the version in the URL as a
family version (or epoch, using more common terminology)? Then we can have
API versioning that is completely independent, and bump major when needed, etc.

Nothing can be done in ENROLL case IMO, but in the future we would make
such change a 3.0 version, and could backport changes to 2.X branch for
some (hopefully short) time.

I'm also really fond of the idea of allowing people to request 1.latest,
2.latest, etc. in this case, where 2.latest would mean gimme all the features
which I can receive without breaking, provided that I'm compatible with the
2.x branch.
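
(For reference, negotiation today is just a header -- a hedged sketch
with python-requests, endpoint made up; iirc the server also accepts the
literal string "latest", which is close to what I describe:)

    import requests

    # Ask explicitly for 1.11 semantics; with no header at all the
    # server assumes its minimum supported (i.e. oldest) version.
    response = requests.get(
        'http://ironic.example.com:6385/v1/nodes',
        headers={'X-OpenStack-Ironic-API-Version': '1.11'})
    print(response.headers.get('X-OpenStack-Ironic-API-Version'))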



 The major version number is duplicated in both the URI and the header,
 though Ironic will error if they do not match. Also, there is no patch or
 micro version.

 So, were we to change the major version in the header, I would expect
 that we also change it in the URL, which means registering a new endpoint
 with Keystone, and, well, all of that.

 -D

 [1] https://review.openstack.org/#/c/187112/


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
--
-- Dmitry Tantsur
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-31 Thread Mike Perez
On Fri, Jul 31, 2015 at 8:56 AM, Joshua Harlow harlo...@outlook.com wrote:
 ...random thought here, skip as needed... in all honesty orchestration
 solutions like mesos
 (http://mesos.apache.org/assets/img/documentation/architecture3.jpg),
 map-reduce solutions like hadoop, stream processing systems like apache
 storm (...), are already using zookeeper and I'm not saying we should just
 use it because they are, but the likelihood that they just picked it for no
 reason is imho slim.

I'd really like to see focus cross project. I don't want Ceilometer to
 depend on ZooKeeper, Cinder to depend on etcd, etc. This is not ideal
for an operator to have to deploy, learn and maintain each of these
solutions.

I think this is difficult when you consider everyone wants options of
their preferred DLM. If we went this route, we should pick one.

Regardless, I want to know if we really need a DLM. Does Ceilometer
really need a DLM? Does Cinder really need a DLM? Can we just use a
hash ring solution where operators don't even have to know or care
about deploying a DLM and running multiple instances of Cinder manager
just works?

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-ansible-deployment][openstack-ansible] Release review/bug list for tomorrow's meeting

2015-07-31 Thread Jesse Pretorius
On 29 July 2015 at 22:04, Jesse Pretorius jesse.pretor...@gmail.com wrote:

 The following reviews are in-flight and are important for the upcoming
 releases, and therefore there is a need for more reviews and in some cases
 backports once the master patches have landed:
 https://review.openstack.org/#/q/starredby:%22Jesse+Pretorius%22+project:stackforge/os-ansible-deployment,n,z

 The upcoming releases (this weekend) are:

 Kilo: https://launchpad.net/openstack-ansible/+milestone/11.1.0

Unfortunately we have had to postpone the 11.1.0 release until Monday,
as several key patches missed our merge deadline.

The entire available core team agreed in the #openstack-ansible channel
that this would be best. It'd be better for us all to have a bit more
time to review the final list of patches, especially considering several of
them are quite substantial.

We'll revisit this on Monday - please continue to review if you have a
moment to do so before then.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] [os-ansible-deployment] Kilo - Liberty Upgrade Problems

2015-07-31 Thread Jesse Pretorius
On 31 July 2015 at 16:59, Ian Cordasco ian.corda...@rackspace.com wrote:


 So, I'm not proposing that yaprt be updated to forcibly add epochs or
 anything else. What yaprt will generate is exactly what upstream
 specifies. For example, glance would be version 11.0.0, nova would be
 12.0.0, etc.

 Giving a more concrete example (which I should have done in the original
 message):

 If we're upgrading Glance in the containers, we'll be upgrading from

 glance==2015.1.0 (obviously not exactly that, but for the sake of this
 example bear with me)

 To

 glance==11.0.0

 And we'll use the version that yaprt reports to do

 # pip install glance==11.0.0

 So the repositories that we build can have both 2015.1.0 and 11.0.0 in it
 and pip will always install 11.0.0.

 This allows for

 1. In-place upgrades
 2. Reproducibility
 3. Stability
 4. Side-stepping upstream refusing to use epochs


Perfect, this is exactly what I was hoping for.


 Does that make sense?


Perfectly. +1 from me on the approach.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Ironic] A possible solution for HA Active-Active

2015-07-31 Thread Joshua Harlow

Joshua Harlow wrote:

Mike Perez wrote:

On Fri, Jul 31, 2015 at 8:56 AM, Joshua Harlowharlo...@outlook.com
wrote:

...random thought here, skip as needed... in all honesty orchestration
solutions like mesos
(http://mesos.apache.org/assets/img/documentation/architecture3.jpg),
map-reduce solutions like hadoop, stream processing systems like apache
storm (...), are already using zookeeper and I'm not saying we should
just
use it because they are, but the likelihood that they just picked it
for no
reason is imho slim.


I'd really like to see focus cross project. I don't want Ceilometer to
depend on ZooKeeper, Cinder to depend on etcd, etc. This is not ideal
for an operator to have to deploy, learn and maintain each of these
solutions.

I think this is difficult when you consider everyone wants options of
their preferred DLM. If we went this route, we should pick one.


+1



Regardless, I want to know if we really need a DLM. Does Ceilometer
really need a DLM? Does Cinder really need a DLM? Can we just use a
hash ring solution where operators don't even have to know or care
about deploying a DLM and running multiple instances of Cinder manager
just works?


All very good questions, although IMHO a hash-ring is just a piece of
the puzzle, and is more equivalent to sharding resources, which yes is
one way to scale as long as each shard never touches anything from the
other shards. If those shards ever start to need to touch anything
shared then you get back into this same situation again for a DLM (and at
that point you really do need the 'distributed' part of DLM, because each
shard is distributed).

And a few (maybe obvious) questions:

- How would re-sharding work?
- If sharding (the hash-ring partitioning) is based on entities
(conductors/other) owning a 'bucket' of resources (i.e. entity 1 manages
resources A-F, entity 2 manages resources G-M...), what happens if an
entity dies, does some other entity take over that bucket, what happens
if that entity really hasn't 'died' but is just disconnected from the
network (partition tolerance...)? (If the answer is there is a lock on
the resource/s being used by each entity, then you get back into the LM
question).

I'm unsure about how ironic handles these problems (although I believe
they have a hash-ring and still have a locking scheme as well, so maybe
that's their answer for the dual-entities manipulating the same bucket
problem).


Code for some of this, maybe ironic folks can chime-in:

https://github.com/openstack/ironic/blob/2015.1.1/ironic/conductor/task_manager.py#L18 
(using DB as DLM)


Afaik, since ironic has had a hash-ring and the above task manager built
in from the start (or from a very early commit), they have been better
able to accomplish the HA goal; retrofitting this on top of nova, cinder,
and others is not going to be as easy...






--
Mike Perez

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] [upgrade] Cluster Upgrade - FFE status

2015-07-31 Thread Oleg Gelbukh
Team,

I'd like to inform you about status of FF Exception for Cluster Upgrade
feature.

The due date for the exception was Jul 30th. We had 4 patches to merge in
the beginning of the work.

During review it was decided that changes to the core of Nailgun should be
split into a separate CR, making it 5 patches. In the course of development we
also hit a glitch in the deep layers of Nailgun's networking modules [1]. This
slowed down the development and review process, and as a result we only
merged 1 patch out of 5 by the due date of the FFE.

I would like to ask for 4 more days of FFE, which effectively will move the
due date to Aug 3rd.

[1] https://bugs.launchpad.net/fuel/+bug/1480228

--
Best regards,
Oleg Gelbukh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Let's talk about API versions

2015-07-31 Thread Jim Rollenhagen
On Fri, Jul 31, 2015 at 02:37:52PM -0700, Clint Byrum wrote:
 Excerpts from Sean Dague's message of 2015-07-31 04:14:54 -0700:
  On 07/30/2015 04:58 PM, Devananda van der Veen wrote:
  snip
   Thoughts?
   
   * I'm assuming it is possible to make micro version changes to the
   1.x API
 as 1.10.1, 1.10.2,etc
   
   
   Despite most folks calling this microversions, I have been trying to
   simply call this API version negotiation. 
   
   To your question, no -- the implementations by Nova and Ironic, and the
   proposal that the API-WG has drafted [1], do not actually support
   MAJOR.MINOR.PATCH semantics.
   
   It has been implemented as a combination of an HTTP request to
   http(s)://<server URL>/<major>/<resource URI> plus a
   header X-OpenStack-<service>-API-Version: <major>.<minor>.
   
   The major version number is duplicated in both the URI and the header,
   though Ironic will error if they do not match. Also, there is no patch
   or micro version.
   
   So, were we to change the major version in the header, I would expect
   that we also change it in the URL, which means registering a new
   endpoint with Keystone, and, well, all of that.
  
  Right, it's important to realize that the microversion mechanism is not
  semver, intentionally. It's inspired by HTTP content negotiation, as
  Deva said. I wrote up a lot of the rational for the model in Nova here,
  which the Ironic model is based off of -
  https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/
  
 
 Thanks Sean, this post was exactly what I needed to understand the
 inspiration behind the current situation.
 
  Ironic is a little different. It's entirely an admin API. And most users
  are going to only talk to an Ironic that they own the deployment
  schedule on. So the multi cloud that you don't own concern might not be
  there. But, it would also be confusing to all users if Ironic goes down
  a different path with microversions, and still calls it the same thing.
  
 
 I think being single-tenant makes the impact of changes different,
 however the solution can be the same. While tools that use Ironic may
 not be out in the wild as much from an operator perspective, there will
 be plenty of tools built to the Ironic API that will want to be
 distributed to users of various versions of Ironic.
 
 It sounds to me like for Ironic, the same assumption should be made as
 in the outlined Jackson the Absent solution. Assume no version is old
 version, and require specifying the new version to get any new behavior.
 
 What is preventing Ironic from embracing that?

So, this is actually how the Ironic API behaves. However, it was at some
point decided that the client should have a more recent default version
(which is the main topic for this thread).

I agree with you; I think this is the best route.
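
For the archives, pinning from Python is a single kwarg -- names per the
Kilo-era python-ironicclient, so treat the exact spelling as an
assumption:

    from ironicclient import client

    # Pin explicitly rather than trusting the client's default.
    ironic = client.get_client(
        1,  # major API version
        ironic_url='http://ironic.example.com:6385',
        os_auth_token='example-token',
        os_ironic_api_version='1.6')
    print(ironic.node.list())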

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Ironic] A possible solution for HA Active-Active

2015-07-31 Thread Jim Rollenhagen
On Fri, Jul 31, 2015 at 12:47:34PM -0700, Joshua Harlow wrote:
 Joshua Harlow wrote:
 Mike Perez wrote:
 On Fri, Jul 31, 2015 at 8:56 AM, Joshua Harlowharlo...@outlook.com
 wrote:
 ...random thought here, skip as needed... in all honesty orchestration
 solutions like mesos
 (http://mesos.apache.org/assets/img/documentation/architecture3.jpg),
 map-reduce solutions like hadoop, stream processing systems like apache
 storm (...), are already using zookeeper and I'm not saying we should
 just
  use it because they are, but the likelihood that they just picked it
  for no
  reason is imho slim.
 
 I'd really like to see focus cross project. I don't want Ceilometer to
  depend on ZooKeeper, Cinder to depend on etcd, etc. This is not ideal
 for an operator to have to deploy, learn and maintain each of these
 solutions.
 
 I think this is difficult when you consider everyone wants options of
 their preferred DLM. If we went this route, we should pick one.
 
 +1
 
 
 Regardless, I want to know if we really need a DLM. Does Ceilometer
 really need a DLM? Does Cinder really need a DLM? Can we just use a
 hash ring solution where operators don't even have to know or care
 about deploying a DLM and running multiple instances of Cinder manager
 just works?
 
 All very good questions, although IMHO a hash-ring is just a piece of
 the puzzle, and is more equivalent to sharding resources, which yes is
 one way to scale as long as each shard never touches anything from the
 other shards. If those shards ever start to need to touch anything
 shared then you get back into this same situation again for a DLM (and at
 that point you really do need the 'distributed' part of DLM, because each
 shard is distributed).
 
 And a few (maybe obvious) questions:
 
 - How would re-sharding work?
 - If sharding (the hash-ring partitioning) is based on entities
 (conductors/other) owning a 'bucket' of resources (i.e. entity 1 manages
 resources A-F, entity 2 manages resources G-M...), what happens if an
 entity dies, does some other entity take over that bucket, what happens
 if that entity really hasn't 'died' but is just disconnected from the
 network (partition tolerance...)? (If the answer is there is a lock on
 the resource/s being used by each entity, then you get back into the LM
 question).
 
 I'm unsure about how ironic handles these problems (although I believe
 they have a hash-ring and still have a locking scheme as well, so maybe
 that's their answer for the dual-entities manipulating the same bucket
 problem).
 
 Code for some of this, maybe ironic folks can chime-in:
 
 https://github.com/openstack/ironic/blob/2015.1.1/ironic/conductor/task_manager.py#L18
 (using DB as DLM)
 
 Afaik, since ironic has had a hash-ring and the above task manager built in
 from the start (or from a very early commit), they have been better able to
 accomplish the HA goal; retrofitting this on top of nova, cinder, and
 others is not going to be as easy...

I would still like to find time, one day, to use etcd or zookeeper as
our DLM in Ironic. Not having TTLs etc has been painful for us, though
we've mostly worked around it by now.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PATCH v4 2/5] ovn: Add bridge mappings to ovn-controller.

2015-07-31 Thread Russell Bryant
I found a couple of problems in this one.  I'll fix them in v5 in a few
minutes.

On 07/31/2015 10:52 AM, Russell Bryant wrote:
 Add a new OVN configuration entry in the Open_vSwitch database called
 ovn-bridge-mappings.  This allows the configuration of mappings
 between a physical network name and an OVS bridge that provides
 connectivity to that network.
 
 For example, if you wanted to configure physnet1 to map to br-eth0
 and physnet2 to map to br-eth1, the configuration would be:
 
   $ ovs-vsctl set open . \
external-ids:ovn-bridge-mappings=physnet1:br-eth0,physnet2:br-eth1
 
 Patch ports between these bridges and the integration bridge are
 automatically created and also removed if necessary when the
 configuration changes.
 
 Documentation for this configuration value is introduced in a later
 patch that makes use of this by introducing a localnet logical port
 type.
 
 Signed-off-by: Russell Bryant rbry...@redhat.com
 +static void
 +create_patch_port(struct controller_ctx *ctx,
 +                  const char *network,
 +                  const struct ovsrec_bridge *b1,
 +                  const struct ovsrec_bridge *b2)
 +{
 +    struct ovsrec_interface *iface;
 +    struct ovsrec_port *port, **ports;
 +    size_t i;
 +    char *port_name;
 +
 +    port_name = xasprintf("patch-%s-to-%s", b1->name, b2->name);
 +
 +    ovsdb_idl_txn_add_comment(ctx->ovs_idl_txn,
 +        "ovn-controller: creating patch port '%s' from '%s' to '%s'",
 +        port_name, b1->name, b2->name);

This will blow up if ctx->ovs_idl_txn is NULL, which happens under
normal circumstances.


 @@ -271,6 +495,10 @@ main(int argc, char *argv[])
  const struct ovsrec_bridge *br_int = get_br_int(ctx.ovs_idl);
  const char *chassis_id = get_chassis_id(ctx.ovs_idl);
  
 +/* Map bridges to local nets from ovn-bridge-mappings */
 +struct smap bridge_mappings = SMAP_INITIALIZER(bridge_mappings);
 +init_bridge_mappings(ctx, br_int, bridge_mappings);
 +

This should make sure br_int isn't NULL first.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-31 Thread Clint Byrum
Excerpts from Mike Perez's message of 2015-07-31 10:40:04 -0700:
 On Fri, Jul 31, 2015 at 8:56 AM, Joshua Harlow harlo...@outlook.com wrote:
  ...random thought here, skip as needed... in all honesty orchestration
  solutions like mesos
  (http://mesos.apache.org/assets/img/documentation/architecture3.jpg),
  map-reduce solutions like hadoop, stream processing systems like apache
  storm (...), are already using zookeeper and I'm not saying we should just
  use it because they are, but the likelihood that they just picked it for no
  reason is imho slim.
 
 I'd really like to see focus cross project. I don't want Ceilometer to
  depend on ZooKeeper, Cinder to depend on etcd, etc. This is not ideal
 for an operator to have to deploy, learn and maintain each of these
 solutions.
 
 I think this is difficult when you consider everyone wants options of
 their preferred DLM. If we went this route, we should pick one.
 
 Regardless, I want to know if we really need a DLM. Does Ceilometer
 really need a DLM? Does Cinder really need a DLM? Can we just use a
 hash ring solution where operators don't even have to know or care
 about deploying a DLM and running multiple instances of Cinder manager
 just works?
 

So in the Ironic case, if two conductors decide they both own one IPMI
controller, _chaos_ can ensue. They may, at different times, read that
the power is up, or down, and issue power control commands that may take
many seconds; the next status poll by the other conductor may then cause
it to react by reversing the change, and they'll just fight over the node
in a tug-o-war fashion.

Oh wait, except, thats not true. Instead, they use the database as a
locking mechanism, and AFAIK, no nodes have been torn limb from limb by
two conductors thus far.

But, a DLM would be more efficient, and actually simplify failure
recovery for Ironic's operators. The database locks suffer from being a
little too conservative, and sometimes you just have to go into the DB
and delete a lock after something explodes (this was true 6 months ago;
it may be better automated by now, I don't know).
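
The lock itself is morally just an atomic compare-and-swap on a
reservation column. A sketch with SQLAlchemy Core (the table and column
names mirror ironic's schema, but treat the details as assumptions; the
real code lives behind the DB API and handles retries):

    import sqlalchemy as sa

    metadata = sa.MetaData()
    nodes = sa.Table(
        'nodes', metadata,
        sa.Column('uuid', sa.String(36), primary_key=True),
        sa.Column('reservation', sa.String(255), nullable=True))

    def reserve_node(conn, node_uuid, conductor_host):
        # The UPDATE only wins if nobody holds the reservation;
        # zero rows affected means "already locked, back off".
        result = conn.execute(
            nodes.update()
            .where(sa.and_(nodes.c.uuid == node_uuid,
                           nodes.c.reservation.is_(None)))
            .values(reservation=conductor_host))
        return result.rowcount == 1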

Anyway, I'm all for the simplest possible solution. But, don't make it
_too_ simple.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Let's talk about API versions

2015-07-31 Thread Clint Byrum
Excerpts from Sean Dague's message of 2015-07-31 04:14:54 -0700:
 On 07/30/2015 04:58 PM, Devananda van der Veen wrote:
 snip
  Thoughts?
  
  * I'm assuming it is possible to make micro version changes to the
  1.x API
as 1.10.1, 1.10.2,etc
  
  
  Despite most folks calling this microversions, I have been trying to
  simply call this API version negotiation. 
  
  To your question, no -- the implementations by Nova and Ironic, and the
  proposal that the API-WG has drafted [1], do not actually support
  MAJOR.MINOR.PATCH semantics.
  
  It has been implemented as a combination of an HTTP request to
  http(s)://<server URL>/<major>/<resource URI> plus a
  header X-OpenStack-<service>-API-Version: <major>.<minor>.
  
  The major version number is duplicated in both the URI and the header,
  though Ironic will error if they do not match. Also, there is no patch
  or micro version.
  
  So, were we to change the major version in the header, I would expect
  that we also change it in the URL, which means registering a new
  endpoint with Keystone, and, well, all of that.
 
 Right, it's important to realize that the microversion mechanism is not
 semver, intentionally. It's inspired by HTTP content negotiation, as
 Deva said. I wrote up a lot of the rational for the model in Nova here,
 which the Ironic model is based off of -
 https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/
 

Thanks Sean, this post was exactly what I needed to understand the
inspiration behind the current situation.

 Ironic is a little different. It's entirely an admin API. And most users
 are going to only talk to an Ironic that they own the deployment
 schedule on. So the multi cloud that you don't own concern might not be
 there. But, it would also be confusing to all users if Ironic goes down
 a different path with microversions, and still calls it the same thing.
 

I think being single-tenant makes the impact of changes different,
however the solution can be the same. While tools that use Ironic may
not be out in the wild as much from an operator perspective, there will
be plenty of tools built to the Ironic API that will want to be
distributed to users of various versions of Ironic.

It sounds to me like for Ironic, the same assumption should be made as
in the outlined Jackson the Absent solution. Assume no version is old
version, and require specifying the new version to get any new behavior.

What is preventing Ironic from embracing that?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to use the log server in CI ?

2015-07-31 Thread Tang Chen

Hi Abhishek,

I found out what was wrong.

I didn't configure a publisher in my jobs.
Sorry for the trouble.
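
For anyone else who hits this: with jenkins-job-builder the missing
piece looks roughly like the snippet below (the site name and target
path are assumptions -- adjust them to your own setup):

    publishers:
      - scp:
          site: 'logs.example.org'
          files:
            - target: 'logs/$JOB_NAME/$BUILD_NUMBER'
              copy-console: true
              copy-after-failure: true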

But I do think the document on GitHub is not well suited to beginners.
So I will post some patches to provide more info for those who use this
tool for the very first time, once I finish the work at hand.

And about the nodepool you asked about last time: I'm working on it.
But due to the proxy in my company, I have to solve lots of trivial
problems first.


Thanks.


On 07/31/2015 09:57 AM, Tang Chen wrote:

Hi Abhishek,


On 07/30/2015 04:54 PM, Abhishek Shrivastava wrote:

Hi Tang,

After completing the logServer installation the logs will go to your 
machine automatically after each build run.


I don't quite understand here. I didn't configure anything about
Jenkins in install_log_server.sh except the public SSH key.
How could Jenkins find and connect to the log server? Or how could
the log server find Jenkins?


Now I can access my log server index page. But when I run a build,
no new log is put into /srv/static/logs/.


Thanks.





On Thu, Jul 30, 2015 at 2:19 PM, Tang Chen tangc...@cn.fujitsu.com wrote:


Hi Asselin, Abhishek,

Sorry, it is about the CI again.

I run install_log_server.sh to setup a log server.

I setup the log server on the same machine with my Jenkins
Master, and configured it like this:
$DOMAIN=localhost
$JENKINS_SSH_PUBLIC_KEY = path to my ssh key

The script completed. But I don't know how to use the log server.

Here are my questions:

    1. Is the log server able to be on the same machine as the Jenkins
    Master?
    I think maybe the Apache in the log server conflicts with the
    Jenkins server.


The answer to your question is no: the logs generated
each time will become large in size, so it is recommended to run the
log server on a separate machine with a public IP.



2. Is the log server able to upload the logs to Gerrit
    automatically?
    Or is it just a server for you to view the logs?


The logs are not uploaded to Gerrit; only success or failure is
reported. Also, when you click on the job in Gerrit with either
message, you will be redirected to the log server page.



I raised an issue on github.  You can also discuss this on github
if you like.
(https://github.com/rasselin/os-ext-testing/issues/19)

I also asked about this in IRC #openstack-infra, but it seems
that very few people are using os-ext-testing.

Thanks.




--
Thanks & Regards,
Abhishek
Cloudbyte Inc. http://www.cloudbyte.com




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposing Kanagaraj Manickam and Ethan Lynn for heat-core

2015-07-31 Thread Qiming Teng
+1 from qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? 31 July 2015

2015-07-31 Thread Lana Brindley

Hi everyone,

This week, like last, was all about the RST conversion. With just a
48-hour-long sprint, the Install Guide is almost complete! A hearty
congratulations to everyone who contributed. One of the interesting
things about doing all these conversions is the amount of conversation
it has started about our conventions, and I'm pleased to see that we're
really starting to get a feel for how RST can work for us, and what we
can demand of it. It's also great to see some related changes to the
sphinx theme that handles how we display our shiny RST books.

== Progress towards Liberty ==

75 days to go!

* RST conversion:
** Install Guide: Conversion is nearly done, sign up here:
https://wiki.openstack.org/wiki/Documentation/Migrate#Installation_Guide_Migration
** Cloud Admin Guide: is nearly done. Sign up here:
https://wiki.openstack.org/wiki/Documentation/Migrate#Cloud_Admin_Guide_Migration
** HA Guide: is also nearly done. Get in touch with Meg or Matt:
https://wiki.openstack.org/wiki/Documentation/HA_Guide_Update
** Security Guide: Conversion is now underway, sign up here:
https://etherpad.openstack.org/p/sec-guide-rst

* User Guides information architecture overhaul
** Waiting on the RST conversion of the Cloud Admin Guide to be complete

* Greater focus on helping out devs with docs in their repo
** Work has stalled on the Ironic docs, we need to pick this up again.
Contact me if you want to know more, or are willing to help out.

* Improve how we communicate with and support our corporate contributors
** I have been brainstorming ideas with Foundation, watch this space!

* Improve communication with Docs Liaisons
** I'm very pleased to see liaisons getting more involved in our bugs
and reviews. Keep up the good work!

* Clearing out old bugs
** Thanks to Brian for picking up one of the spotlight bugs from last
week. Three new bugs to look at this week.

== RST Migration ==

The next books we are focusing on for RST conversion are the Install
Guide, Cloud Admin Guide, HA Guide, and the Security Guide. If you would
like to assist, please get in touch with the appropriate speciality team:

* Install Guide:
** Contact Karin Levenstein karin.levenst...@rackspace.com
** Sign up here:
https://wiki.openstack.org/wiki/Documentation/Migrate#Installation_Guide_Migration

* Cloud Admin Guide:
** Contact Brian Moss kallimac...@gmail.com  Joseph Robinson
joseph.r.em...@gmail.com
** Sign up to help out here:
https://wiki.openstack.org/wiki/Documentation/Migrate#Cloud_Admin_Guide_Migration

* HA Guide
** Contact Meg McRoberts dreidellh...@yahoo.com or Matt Griffin
m...@mattgriffin.com
** Blueprint:
https://blueprints.launchpad.net/openstack-manuals/+spec/improve-ha-guide

* Security Guide
** Contact Nathaniel Dillon nathaniel.dil...@hp.com
** Info: https://etherpad.openstack.org/p/sec-guide-rst

For books that are now being converted, don't forget that any change you
make to the XML must also be made to the RST version until conversion is
complete. Our lovely team of cores will be keeping an eye out to make
sure loose changes to XML don't pass the gate, but try to help them out
by pointing out both patches in your reviews.

== Docs Tools ==

Thanks to Dave for picking up a bug this week and adding a navigation
bar to the sidepane of the docs theme for our RST books. We should be
releasing an updated docs theme shortly so you can play with this new
functionality. In the meantime, if you have suggestions for the docs
theme, raise the issue on the docs mailing list, or go right ahead and
create a bug for it.

== APAC Docs Swarm ==

The APAC team have been working on holding another doc swarm, this time
working on the Architecture Design Guide. It's to be held at the Red Hat
office in Brisbane, on 13-14 August. Check out
http://openstack-swarm.rhcloud.com/ for all the info.

== Doc team meeting ==

The US meeting got cancelled this week.

The next meetings are:
APAC: Wednesday 5 August, 00:30:00 UTC
US: Wednesday 12 August, 14:00:00 UTC


Please go ahead and add any agenda items to the meeting page here:
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

== Spotlight bugs for this week ==

Three new spotlight bugs for you to sink your teeth into:

https://bugs.launchpad.net/openstack-manuals/+bug/1257018 VPNaaS isn't
documented in cloud admin

https://bugs.launchpad.net/openstack-manuals/+bug/1257656 VMware: add
support for VM diagnostics

https://bugs.launchpad.net/openstack-manuals/+bug/1261969 Document nova
server package

- --

Remember, if you have content you would like to add to this newsletter,
or you would like to be added to the distribution list, please email me
directly at openst...@lanabrindley.com, or visit:
https://wiki.openstack.org/w/index.php?title=Documentation/WhatsUpDoc

Keep on doc'ing!

Lana

- -- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com
-BEGIN PGP SIGNATURE-

Re: [openstack-dev] Proposing Kanagaraj Manickam and Ethan Lynn for heat-core

2015-07-31 Thread Thomas Spatzier
+1 on both from me!

Cheers,
Thomas

 From: Steve Baker sba...@redhat.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
 Date: 31/07/2015 06:37
 Subject: [openstack-dev] Proposing Kanagaraj Manickam and Ethan Lynn
 for heat-core

 I believe the heat project would benefit from Kanagaraj Manickam and
 Ethan Lynn having the ability to approve heat changes.

 Their reviews are valuable[1][2] and numerous[3], and both have been
 submitting useful commits in a variety of areas in the heat tree.

 Heat cores, please express your approval with a +1 / -1.

 [1] http://stackalytics.com/?user_id=kanagaraj-manickam&metric=marks
 [2] http://stackalytics.com/?user_id=ethanlynn&metric=marks
 [3] http://stackalytics.com/report/contribution/heat-group/90


__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova, cinder, neutron] quota-update tenant-name bug

2015-07-31 Thread Salvatore Orlando
More comments inline.

Salvatore

On 31 July 2015 at 01:47, Kevin Benton blak...@gmail.com wrote:

 The issue is that the Neutron credentials might not have privileges to
 resolve the name to a UUID. I suppose we could just fail in that case.


As quota-update is usually restricted to admin users this should not be a
problem, unless the deployment uses per-service admin users.



 Let's see what happens with the nova spec Salvatore linked.


That spec seems stuck to me. I think the reason is a lack of motivation
for raising its priority.



 On Thu, Jul 30, 2015 at 4:33 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 If the quota update resolved the name to a uuid before it updated the
 quota by uuid, I think it would resolve the issues? You'd just have to
 check if keystone was in use, and then do the extra resolve on update. I
 think the rest of the stuff can just remain using uuids?


Once you accept that it's not a big deal to do a round trip to keystone,
then we can do whatever we want. If there is value from an API usability
perspective we'll just do that.
If the issue is instead more the CLI UX, I would consider resolving
the name (and possibly validating the tenant uuid) in python-neutronclient.
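
As a rough illustration of that client-side validation (a hypothetical
helper, not existing neutronclient code):

import uuid

def validate_tenant_id(value):
    # Reject anything that is not a UUID, so a tenant *name* can never
    # be silently written into the tenant_id column.
    try:
        uuid.UUID(value)
    except (TypeError, ValueError):
        raise ValueError("'%s' is not a valid tenant id" % value)
    return value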

Also, I've checked the docs [1] and [2] and neutron quota-update is not
supposed to accept tenant name - so probably the claim made in the initial
post on this thread did not apply to neutron after all.


 Thanks,
 Kevin
 --
 From: Kevin Benton [blak...@gmail.com]
 Sent: Thursday, July 30, 2015 4:22 PM

 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova, cinder, neutron] quota-update
 tenant-name bug

 Good point. Unfortunately the other issues are going to be the hard part
 to deal with. I probably shouldn't have brought up performance as a
 complaint at this stage. :)

 On Thu, Jul 30, 2015 at 3:26 AM, Fox, Kevin M kevin@pnnl.gov wrote:

 Can a non-admin update quotas? Quota updates are rare. Performance of
 them can take the hit.

 Thanks,
 Kevin

 --
 From: Kevin Benton
 Sent: Wednesday, July 29, 2015 10:44:49 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova, cinder, neutron] quota-update
 tenant-name bug

 Dev lessons learned: we need to validate our inputs better and refuse
 to update a tenant-id that does not exist.

 This is something that has come up in Neutron discussions before. There
 are two issues here:
 1. Performance: it will require a round-trip to Keystone on every
 request.
 2. If the Neutron keystone user is unprivileged and the request context
 is unprivileged, we might not actually be allowed to tell if the tenant
 exists.

 The first we can deal with, but the second is going to be an issue that
 we might not be able to get around.

 How about as a temporary solution, we just confirm that the input is a
 UUID so names don't get used?

 On Wed, Jul 29, 2015 at 10:19 PM, Bruno L teolupus@gmail.com
 wrote:

 This is probably affecting other people as well, so hopefully this message
 will save some headaches.

 [nova,cinder,neutron] will allow you to do a quota-update using the
 tenant-name (instead of tenant-id). They will also allow you to do a
 quota-show tenant-name and get the expected values back.

 Then you go to the tenant and end up surprised that the quotas have not
 been applied and you can still do things you were not supposed to.

 It turns out that [nova,cinder,neutron] just created an entry in the
 quota table, inserting the tenant-name in the tenant-id field.

 Surprise, surprise!

 Ops lessons learned: use the tenant-id!

 Dev lessons learned: we need to validate our inputs better and refuse
 to update a tenant-id that does not exist.

 I have documented this behaviour on
 https://bugs.launchpad.net/neutron/+bug/1399065 and
 https://bugs.launchpad.net/neutron/+bug/1317515. I can reproduce it in
 IceHouse.

 Could someone please confirm if this is still the case on master? If
 not, which version of OpenStack addressed that?

 Thanks,
 Bruno


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] Proposing Kanagaraj Manickam and Ethan Lynn for heat-core

2015-07-31 Thread Sergey Kraynev
+1 from me for both.

Regards,
Sergey.

On 31 July 2015 at 07:35, Steve Baker sba...@redhat.com wrote:

 I believe the heat project would benefit from Kanagaraj Manickam and Ethan
 Lynn having the ability to approve heat changes.

 Their reviews are valuable[1][2] and numerous[3], and both have been
 submitting useful commits in a variety of areas in the heat tree.

 Heat cores, please express your approval with a +1 / -1.

 [1] http://stackalytics.com/?user_id=kanagaraj-manickam&metric=marks
 [2] http://stackalytics.com/?user_id=ethanlynn&metric=marks
 [3] http://stackalytics.com/report/contribution/heat-group/90

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] setting minimum version of setuptools in setup.py

2015-07-31 Thread Robert Collins
On 30 July 2015 at 05:27, Robert Collins robe...@robertcollins.net wrote:
 Similar to pbr, we have a minimum version of setuptools required to
 consistently install things in OpenStack. Right now that's 17.1.

 However, we don't declare a setup_requires version for it.

 I think we should.

 setuptools can't self-upgrade, and we don't have declarative deps yet,
 so one reaction I expect here is 'how will this help'.

 The problem lies in the failure modes. With no dependency declared,
 setuptools will either *silently fail*, or fail with one
 weird error - one that doesn't say anything like 'setuptools 3.3 cannot
 handle PEP 426 version markers'.

 If we set a minimum (but not a maximum) setuptools version as a
 setup_requires, I think we'll signal our actual dependencies to
 redistributors, and folk consuiming python packages, in a much more
 direct fashion. They'll still have to recover manually, but thats ok
 IMO. As long as we don't set upper bounds, we won't deadlock ourselves
 like we did in the past.

These are the errors we get when the version present is too old.
First, 0.6c11, which detects the error and reports it.
Then 3.3, which attempts to upgrade setuptools just-in-time, but of
course the old setuptools code is what executes, and so the error is
confusing :/. But it is still better than no hint at all: the presence
of the setuptools upgrade is a signal.
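
For concreteness, a minimal pbr-style setup.py carrying such a pin
could look like the following (a sketch; the exact OpenStack template
may differ):

import setuptools

# Declaring setuptools in setup_requires cannot make it self-upgrade,
# but it turns a silent or cryptic failure into an explicit
# VersionConflict, as the transcripts below show.
setuptools.setup(
    setup_requires=['pbr>=1.3', 'setuptools>=17.1'],
    pbr=True)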


$ pip install setuptools==0.6c11
...
Successfully installed setuptools-0.6rc11
$ pip install .
Processing /home/robertc/work/mock
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 20, in <module>
  File "/tmp/pip-wnBxi2-build/setup.py", line 6, in <module>
    pbr=True)
  File "/usr/lib/python2.7/distutils/core.py", line 111, in setup
    _setup_distribution = dist = klass(attrs)
  File "/home/robertc/.virtualenvs/test/local/lib/python2.7/site-packages/setuptools/dist.py",
line 260, in __init__
    self.fetch_build_eggs(attrs.pop('setup_requires'))
  File "/home/robertc/.virtualenvs/test/local/lib/python2.7/site-packages/setuptools/dist.py",
line 284, in fetch_build_eggs
    parse_requirements(requires), installer=self.fetch_build_egg
  File "/home/robertc/.virtualenvs/test/local/lib/python2.7/site-packages/pkg_resources.py",
line 569, in resolve
    raise VersionConflict(dist,req) # XXX put more info here
pkg_resources.VersionConflict: (setuptools 0.6c11
(/home/robertc/.virtualenvs/test/lib/python2.7/site-packages),
Requirement.parse('setuptools>=17.1'))


Command python setup.py egg_info failed with error code 1 in
/tmp/pip-wnBxi2-build




$ pip install setuptools==3.3
Collecting setuptools==3.3
  Downloading setuptools-3.3-py2.py3-none-any.whl (545kB)
100% || 548kB 674kB/s
Installing collected packages: setuptools
  Found existing installation: setuptools 18.0.1
Uninstalling setuptools-18.0.1:
  Successfully uninstalled setuptools-18.0.1
Successfully installed setuptools-3.3
$ pip install .
Processing /home/robertc/work/mock
Complete output from command python setup.py egg_info:

Installed /tmp/pip-Grkk9a-build/setuptools-18.0.1-py2.7.egg
[pbr] Generating ChangeLog
error in setup command: Invalid environment marker:
(python_version<3.3 and python_version>=3)


Command python setup.py egg_info failed with error code 1 in
/tmp/pip-Grkk9a-build

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] CI System is broken

2015-07-31 Thread Gareth
Could this issue be fixed today?

Btw, is it possible to design a special mode for gate/zuul? If ops switch
to that mode, no new Gerrit event can trigger any Jenkins job.

On Thu, Jul 30, 2015 at 9:13 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-07-30 10:19:20 +0200 (+0200), Andreas Jaeger wrote:
  Joshua just restarted Zuul and is currently requeueing the jobs
  that were in the system.
 [...]

 Also we've got some additional debugging added prior to this most
 recent restart which should assist in narrowing down the cause
 if/when it happens again.
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Gareth

Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
OpenStack contributor, kun_huang@freenode
My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me
and I'll donate $1 or ¥1 to an open organization you specify.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-31 Thread Joshua Harlow

Monty Taylor wrote:

On 08/01/2015 03:40 AM, Mike Perez wrote:

On Fri, Jul 31, 2015 at 8:56 AM, Joshua Harlow harlo...@outlook.com wrote:

...random thought here, skip as needed... in all honesty orchestration
solutions like mesos
(http://mesos.apache.org/assets/img/documentation/architecture3.jpg),
map-reduce solutions like hadoop, stream processing systems like apache
storm (...), are already using zookeeper and I'm not saying we should just
use it 'cause they are, but the likelihood that they just picked it for
no reason is imho slim.

I'd really like to see cross-project focus here. I don't want Ceilometer to
depend on ZooKeeper, Cinder to depend on etcd, etc. This is not ideal
for an operator to have to deploy, learn and maintain each of these
solutions.

I think this is difficult when you consider everyone wants options of
their preferred DLM. If we went this route, we should pick one.

Regardless, I want to know if we really need a DLM. Does Ceilometer
really need a DLM? Does Cinder really need a DLM? Can we just use a
hash ring solution where operators don't even have to know or care
about deploying a DLM and running multiple instances of Cinder manager
just works?


I'd like to take that one step further and say that we should also look
holistically at the other things that such technologies are often used
for in distributed systems and see if, in addition to 'does Cinder need
a DLM', we should ask 'does Cinder need service discovery' and 'does
Cinder need a distributed KV store' - and does anyone else?

Adding something like zookeeper or etcd or consul has the potential to
allow us to design an OpenStack that works better. Adding all of them in
an ad-hoc and uncoordinated manner is a bit sledgehammery.

The Java community uses ZooKeeper a lot.
The container orchestration community seems to all love etcd.
I hear tell that there are a bunch of ops people who are in love with consul.

I'd suggest we look at more than lock management.


Oh I very much agree, but gotta start somewhere :)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] till when must code land to make it in liberty

2015-07-31 Thread Paul Carver

On 7/31/2015 9:47 AM, Kyle Mestery wrote:


However, it's reasonable to assume the later you propose your RFE bug, the
less of a chance it has of making it. We do enforce the Feature Freeze [2],
which is the week of August 31 [3]. Thus, effectively you have 4 weeks to
submit patches for new features.



Does the feature freeze apply to big tent work? I certainly think we 
should try to stick as close to Neutron process as possible, but I'm 
wondering if we need to consider August 31 a hard deadline for the 
networking-sfc work.


I suspect we won't be feature complete by the 31st, we will probably 
need to work well into September in order to ensure that we have 
something with all the necessary parts working.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-31 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Mike Perez's message of 2015-07-31 10:40:04 -0700:

On Fri, Jul 31, 2015 at 8:56 AM, Joshua Harlow harlo...@outlook.com wrote:

...random thought here, skip as needed... in all honesty orchestration
solutions like mesos
(http://mesos.apache.org/assets/img/documentation/architecture3.jpg),
map-reduce solutions like hadoop, stream processing systems like apache
storm (...), are already using zookeeper and I'm not saying we should just
use it 'cause they are, but the likelihood that they just picked it for
no reason is imho slim.

I'd really like to see cross-project focus here. I don't want Ceilometer to
depend on ZooKeeper, Cinder to depend on etcd, etc. This is not ideal
for an operator to have to deploy, learn and maintain each of these
solutions.

I think this is difficult when you consider everyone wants options of
their preferred DLM. If we went this route, we should pick one.

Regardless, I want to know if we really need a DLM. Does Ceilometer
really need a DLM? Does Cinder really need a DLM? Can we just use a
hash ring solution where operators don't even have to know or care
about deploying a DLM and running multiple instances of Cinder manager
just works?



So in the Ironic case, if two conductors decide they both own one IPMI
controller, _chaos_ can ensue. They may, at different times, read that
the power is up, or down, and issue power control commands that may take
many seconds, and thus the next status run of the other conductor may
cause it to react by reversing, and they'll just fight over
the node in a tug-o-war fashion.

Oh wait, except, that's not true. Instead, they use the database as a
locking mechanism, and AFAIK, no nodes have been torn limb from limb by
two conductors thus far.

But, a DLM would be more efficient, and would actually simplify failure
recovery for Ironic's operators. The database locks suffer from being a
little too conservative, and sometimes you just have to go into the DB
and delete a lock after something explodes (this was true 6 months ago;
it may have better automation now, I don't know).



A data point, using kazoo and zk-shell (a Python library and a Python
ZooKeeper shell-like interface), just to show how much introspection can
be done with ZooKeeper when a kazoo lock is created (tooz locks, when
used with ZooKeeper, use this same/similar code).


(session #1)

>>> from kazoo import client
>>> c = client.KazooClient()
>>> c.start()
>>> lk = c.Lock('/resourceX')
>>> lk.acquire()
True

(session #2)

$ zk-shell
Welcome to zk-shell (1.1.0)
(DISCONNECTED) / connect
(DISCONNECTED) / connect localhost:2181
(CLOSED) /
(CONNECTED) / ls /resourceX
75ef011db92a44bfabf5dbf25fe2965c__lock__00
(CONNECTED) / stat 
/resourceX/75ef011db92a44bfabf5dbf25fe2965c__lock__00

Stat(
  czxid=8103
  mzxid=8103
  ctime=1438383904513
  mtime=1438383904513
  version=0
  cversion=0
  aversion=0
  ephemeralOwner=0x14ed0a76f850002
  dataLength=0
  numChildren=0
  pzxid=8103
)
(CONNECTED) / stat /resourceX/
Stat(
  czxid=8102
  mzxid=8102
  ctime=1438383904494
  mtime=1438383904494
  version=0
  cversion=1
  aversion=0
  ephemeralOwner=0x0
  dataLength=0
  numChildren=1
  pzxid=8103
)

### back to session #1: lk.release() releases the lock taken in the first session

(CONNECTED) / ls /resourceX/

(CONNECTED) /

The above shows creation times, who is waiting on the lock, modification
times, the owner... Anyway, I digress; if anyone really wants to know
more about ZooKeeper, let me know or drop into the #zookeeper channel on
freenode (I'm one of the core maintainers of kazoo).


-Josh


Anyway, I'm all for the simplest possible solution. But, don't make it
_too_ simple.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-31 Thread Monty Taylor
On 08/01/2015 03:40 AM, Mike Perez wrote:
 On Fri, Jul 31, 2015 at 8:56 AM, Joshua Harlow harlo...@outlook.com wrote:
 ...random thought here, skip as needed... in all honesty orchestration
 solutions like mesos
 (http://mesos.apache.org/assets/img/documentation/architecture3.jpg),
 map-reduce solutions like hadoop, stream processing systems like apache
 storm (...), are already using zookeeper and I'm not saying we should just
 use it 'cause they are, but the likelihood that they just picked it for
 no reason is imho slim.
 
 I'd really like to see cross-project focus here. I don't want Ceilometer to
 depend on ZooKeeper, Cinder to depend on etcd, etc. This is not ideal
 for an operator to have to deploy, learn and maintain each of these
 solutions.
 
 I think this is difficult when you consider everyone wants options of
 their preferred DLM. If we went this route, we should pick one.
 
 Regardless, I want to know if we really need a DLM. Does Ceilometer
 really need a DLM? Does Cinder really need a DLM? Can we just use a
 hash ring solution where operators don't even have to know or care
 about deploying a DLM and running multiple instances of Cinder manager
 just works?

I'd like to take that one step further and say that we should also look
holistically at the other things that such technologies are often used
for in distributed systems and see if, in addition to 'does Cinder need
a DLM', we should ask 'does Cinder need service discovery' and 'does
Cinder need a distributed KV store' - and does anyone else?

Adding something like zookeeper or etcd or consul has the potential to
allow us to design an OpenStack that works better. Adding all of them in
an ad-hoc and uncoordinated manner is a bit sledgehammery.

The Java community uses ZooKeeper a lot.
The container orchestration community seems to all love etcd.
I hear tell that there are a bunch of ops people who are in love with consul.

I'd suggest we look at more than lock management.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally][Meeting][Agenda]

2015-07-31 Thread Roman Vasilets
Hi, it's a friendly reminder that if you want to discuss some topics at
Rally meetings, please add your topic to our meeting agenda
https://wiki.openstack.org/wiki/Meetings/Rally#Agenda. Don't forget to
specify who will lead the topic, and add some information about it (links,
etc.). Thank you for your attention.

- Best regards, Vasilets Roman.

On Thu, Jul 23, 2015 at 4:26 PM, Roman Vasilets rvasil...@mirantis.com
wrote:

 Hi, it's a friendly reminder that if you want to discuss some topics at
 Rally meetings, please add your topic to our meeting agenda
 https://wiki.openstack.org/wiki/Meetings/Rally#Agenda. Don't forget to
 specify who will lead the topic, and add some information about it (links,
 etc.). Thank you for your attention.

 - Best regards, Vasilets Roman.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: Proposing Kanagaraj Manickam and Ethan Lynn for heat-core

2015-07-31 Thread Pavlo Shchelokovskyy
+1 for both from me,

Best regards,

On Fri, Jul 31, 2015, 11:20 Huangtianhua huangtian...@huawei.com wrote:

 +1 :)

 -----Original Message-----
 From: Steve Baker [mailto:sba...@redhat.com]
 Sent: 31 July 2015 12:36
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] Proposing Kanagaraj Manickam and Ethan Lynn for
 heat-core

 I believe the heat project would benefit from Kanagaraj Manickam and Ethan
 Lynn having the ability to approve heat changes.

 Their reviews are valuable[1][2] and numerous[3], and both have been
 submitting useful commits in a variety of areas in the heat tree.

 Heat cores, please express your approval with a +1 / -1.

 [1] http://stackalytics.com/?user_id=kanagaraj-manickam&metric=marks
 [2] http://stackalytics.com/?user_id=ethanlynn&metric=marks
 [3] http://stackalytics.com/report/contribution/heat-group/90

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-31 Thread Mike Perez
On Mon, Jul 27, 2015 at 12:35 PM, Gorka Eguileor gegui...@redhat.com wrote:
 I know we've all been looking at the HA Active-Active problem in Cinder
 and trying our best to figure out possible solutions to the different
 issues, and since current plan is going to take a while (because it
 requires that we finish first fixing Cinder-Nova interactions), I've been
 looking at alternatives that allow Active-Active configurations without
 needing to wait for those changes to take effect.

 And I think I have found a possible solution, but since the HA A-A
 problem has a lot of moving parts I ended up upgrading my initial
 Etherpad notes to a post [1].

 Even if we decide that this is not the way to go, which we'll probably
 do, I still think that the post brings a little clarity on all the
 moving parts of the problem, even some that are not reflected on our
 Etherpad [2], and it can help us not miss anything when deciding on a
 different solution.

Based on IRC conversations in the Cinder room and hearing people's
opinions in the spec reviews, I'm not convinced the complexity that a
distributed lock manager adds to Cinder for both developers and the
operators who ultimately are going to have to learn to maintain things
like ZooKeeper as a result is worth it.

**Key point**: We're not scaling Cinder itself, it's about scaling to
avoid build up of operations from the storage backend solutions
themselves.

Whatever scaling level people think ZooKeeper is going to accomplish
is not even the question. We don't need it, because Cinder isn't as
complex as people are making it.

I'd like to think the Cinder team is great at recognizing potential
cross-project initiatives. Look at what Thang Pham has done with
Nova's version object solution. He made a generic solution into an
Oslo solution for all, and Cinder is using it. That was awesome, and
people really appreciated that there was a focus on other projects
getting better, not just Cinder.

Have people considered Ironic's hash ring solution? The project Akanda
is now adopting it [1], and I think it might have potential. I'd
appreciate it if interested parties could evaluate it before
the Cinder midcycle sprint next week, to be ready for discussion.

[1] - https://review.openstack.org/#/c/195366/
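
For those who have not looked at it, the core idea is small. An
illustrative consistent-hash sketch (not Ironic's actual
implementation) that maps each volume to exactly one cinder-volume
service, so no global lock is needed to decide ownership:

import bisect
import hashlib

class HashRing(object):
    def __init__(self, hosts, replicas=100):
        # Place each host at many points on the ring so load stays
        # even and only ~1/N of resources move when a host changes.
        self._ring = sorted(
            (self._hash('%s-%d' % (host, i)), host)
            for host in hosts for i in range(replicas))
        self._keys = [key for key, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode('utf-8')).hexdigest(), 16)

    def get_host(self, resource_id):
        # First point on the ring at or after the resource's hash.
        index = bisect.bisect(self._keys, self._hash(resource_id))
        return self._ring[index % len(self._ring)][1]

ring = HashRing(['cinder-vol-1', 'cinder-vol-2', 'cinder-vol-3'])
print(ring.get_host('volume-6ba7b810'))  # same host every time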

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] DAL implementation

2015-07-31 Thread joehuang
Hi, Vega,

Thanks for your response. I think we can include the session in the context.

Best Regards
Chaoyi Huang ( Joe Huang )

From: Vega Cai [mailto:luckyveg...@gmail.com]
Sent: Friday, July 31, 2015 9:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle] DAL implementation

Hi Joe,

I think one independent job is finished in one session. The job is responsible
for starting a session, querying or modifying the database, then ending the
session. Like the port-creation job in Neutron: it starts a session, queries
the network, adds the port, allocates an IP address, then ends the session.

BR
Zhiyuan

On 31 July 2015 at 09:05, joehuang joehu...@huawei.com wrote:
Hi, Vega,

Multiple DB accesses will be a use case for one session, especially for DB data
insertion, where multiple tables will be involved. To embed the session in the
context, it's OK to start a session if the session is empty, but how do we
decide when to commit the data and end the session?

Best Regards
Chaoyi Huang ( Joe Huang )

From: Vega Cai [mailto:luckyveg...@gmail.com]
Sent: Thursday, July 30, 2015 3:03 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [tricircle] DAL implementation

Hi folks,

In my current implementation, there are a core module and a models module. The
core module handles all the database stuff: it starts a session, issues SQL
operations, then ends the session. The models module invokes methods in the
core module to access the database, as shown below:

model.py
def get_site(site_id):
    return core.get_resource(Site, site_id)

core.py
def get_resource(model, pk_value):
    # code to access database

To add context, I am going to implement it like this:

model.py
def get_site(context, site_id):
    policy_check(context)
    return core.get_resource(Site, site_id)

core.py
def get_resource(model, pk_value):
    # code to access database

So there is no need to embed the session into the context.

One advantage of embedding the session into the context is that you can combine
more than one method call in one session, like:

model.py
def complex_operation(context):
    policy_check(context)
    with context.session.begin():
        core.operation1(context)
        core.operation2(context)

But this approach moves session handling from the core module to the models
module, leaving the core module to provide only utility methods.

I'm not sure which one is better.
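
One way to answer the earlier question about when to commit is to make
the context own a lazily created session, so the outermost scope
controls its lifetime (a sketch under that assumption, not tricircle
code):

class Context(object):
    def __init__(self, session_factory):
        self._session_factory = session_factory
        self._session = None

    @property
    def session(self):
        # Start a session on first use; nested calls reuse it.
        if self._session is None:
            self._session = self._session_factory()
        return self._session

model.py can then keep using "with context.session.begin():", and with
SQLAlchemy subtransactions enabled only the outermost begin() block
actually commits.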

BR
Zhiyuan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: Proposing Kanagaraj Manickam and Ethan Lynn for heat-core

2015-07-31 Thread Huangtianhua
+1 :)

-----Original Message-----
From: Steve Baker [mailto:sba...@redhat.com]
Sent: 31 July 2015 12:36
To: OpenStack Development Mailing List
Subject: [openstack-dev] Proposing Kanagaraj Manickam and Ethan Lynn for heat-core

I believe the heat project would benefit from Kanagaraj Manickam and Ethan Lynn 
having the ability to approve heat changes.

Their reviews are valuable[1][2] and numerous[3], and both have been submitting 
useful commits in a variety of areas in the heat tree.

Heat cores, please express your approval with a +1 / -1.

[1] http://stackalytics.com/?user_id=kanagaraj-manickam&metric=marks
[2] http://stackalytics.com/?user_id=ethanlynn&metric=marks
[3] http://stackalytics.com/report/contribution/heat-group/90

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Let's talk about API versions

2015-07-31 Thread Sean Dague
On 07/30/2015 04:58 PM, Devananda van der Veen wrote:
snip
 Thoughts?
 
 * I'm assuming it is possible to make micro version changes to the
 1.x API
   as 1.10.1, 1.10.2,etc
 
 
 Despite most folks calling this microversions, I have been trying to
 simply call this API version negotiation. 
 
 To your question, no -- the implementations by Nova and Ironic, and the
 proposal that the API-WG has drafted [1], do not actually support
 MAJOR.MINOR.PATCH semantics.
 
 It has been implemented as a combination of an HTTP request to
 http(s)://<server URL>/<major>/<resource URI> plus a
 header X-OpenStack-<service>-API-Version: <major>.<minor>.
 
 The major version number is duplicated in both the URI and the header,
 though Ironic will error if they do not match. Also, there is no patch
 or micro version.
 
 So, were we to change the major version in the header, I would expect
 that we also change it in the URL, which means registering a new
 endpoint with Keystone, and, well, all of that.

Right, it's important to realize that the microversion mechanism is not
semver, intentionally. It's inspired by HTTP content negotiation, as
Deva said. I wrote up a lot of the rationale for the model in Nova here,
which the Ironic model is based off of -
https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/

Ironic is a little different. It's entirely an admin API. And most users
are going to only talk to an Ironic that they own the deployment
schedule on. So the 'multi cloud that you don't own' concern might not be
there. But, it would also be confusing to all users if Ironic goes down
a different path with microversions, and still calls it the same thing.

Fwiw, we just landed our 'when do you need a microversion' document,
which might also add context here -
http://docs.openstack.org/developer/nova/api_microversion_dev.html#when-do-i-need-a-new-microversion
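
To make the mechanic concrete, this is roughly what the negotiation
looks like on the wire (an illustrative request only; the endpoint and
token are placeholders):

import requests

# Pin the behaviour contract we want; the server echoes back the
# version it actually honoured in the same header.
resp = requests.get(
    'http://ironic.example.com:6385/v1/nodes',
    headers={'X-OpenStack-Ironic-API-Version': '1.6',
             'X-Auth-Token': 'placeholder-token'})
print(resp.headers.get('X-OpenStack-Ironic-API-Version'))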

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] till when must code land to make it in liberty

2015-07-31 Thread Andreas Scheuring
Hi,
as there is no official feature freeze in neutron anymore, there still
must be a cut-off date, or at least a cut-off time frame, for liberty
code.

Can anyone tell me (roughly) when this is? Is it liberty-3?

I wonder if it's still possible to propose an RFE and get it into
liberty...

Thanks!

-- 
Andreas
(IRC: scheuran)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-07-31 Thread Gorka Eguileor
On Fri, Jul 31, 2015 at 01:47:22AM -0700, Mike Perez wrote:
 On Mon, Jul 27, 2015 at 12:35 PM, Gorka Eguileor gegui...@redhat.com wrote:
  I know we've all been looking at the HA Active-Active problem in Cinder
  and trying our best to figure out possible solutions to the different
  issues, and since current plan is going to take a while (because it
  requires that we finish first fixing Cinder-Nova interactions), I've been
  looking at alternatives that allow Active-Active configurations without
  needing to wait for those changes to take effect.
 
  And I think I have found a possible solution, but since the HA A-A
  problem has a lot of moving parts I ended up upgrading my initial
  Etherpad notes to a post [1].
 
  Even if we decide that this is not the way to go, which we'll probably
  do, I still think that the post brings a little clarity on all the
  moving parts of the problem, even some that are not reflected on our
  Etherpad [2], and it can help us not miss anything when deciding on a
  different solution.
 
 Based on IRC conversations in the Cinder room and hearing people's
 opinions in the spec reviews, I'm not convinced the complexity that a
 distributed lock manager adds to Cinder for both developers and the
 operators who ultimately are going to have to learn to maintain things
 like ZooKeeper as a result is worth it.

Hi Mike,

I think you are right to bring up the cost that adding a DLM to the
solution imposes on operators, as it is something important to take into
consideration. I would like to say that Ceilometer is already using
Tooz, so operators are already familiar with these DLMs, but
unfortunately that would be stretching the truth: Cinder is present in
73% of OpenStack production workloads while Ceilometer is only in 33% of
them, so we would certainly be disturbing some operators.

But we must not forget that the only operators who would need to worry
about deploying and maintaining the DLM are those wanting to deploy
Active-Active configurations (for Active-Passive configurations Tooz
will be working with local file locks, as we do now), and some of
those may think like Duncan does: 'I already have to administer rabbit,
mysql, backends, horizon, load balancers, rate limiters... adding
redis isn't going to make it that much harder.'

That's why I don't think this is such a big deal for the vast majority
of operators.

On the developer side I have to disagree: there is no difference between
using Tooz and using the current oslo synchronization mechanism for
non-Active-Active deployments.
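
To illustrate that last point: the calling code is identical either
way, only the backend URL changes. A minimal tooz sketch (the URLs,
member id and lock name here are placeholders):

from tooz import coordination

# 'file:///var/lib/cinder/locks' keeps today's local file locks;
# 'zookeeper://127.0.0.1:2181' gives distributed locks for A-A.
coordinator = coordination.get_coordinator(
    'zookeeper://127.0.0.1:2181', b'cinder-volume-host-1')
coordinator.start()

lock = coordinator.get_lock(b'volume-6ba7b810')
with lock:
    pass  # critical section: only one service touches this volume

coordinator.stop()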

 
 **Key point**: We're not scaling Cinder itself, it's about scaling to
 avoid build up of operations from the storage backend solutions
 themselves.

You must also consider that an Active-Active solution will help
deployments where downtime is not an option or that have SLAs on uptime
or operational requirements; it's not only about increasing the volume
of operations and reducing times.

 
 Whatever scaling level people think ZooKeeper is going to accomplish
 is not even the question. We don't need it, because Cinder isn't as
 complex as people are making it.
 
 I'd like to think the Cinder team is great at recognizing potential
 cross-project initiatives. Look at what Thang Pham has done with
 Nova's version object solution. He made a generic solution into an
 Oslo solution for all, and Cinder is using it. That was awesome, and
 people really appreciated that there was a focus on other projects
 getting better, not just Cinder.

To be fair, Tooz is just one of those cross-project initiatives you are
describing; it's a generic solution that can be used in all projects,
not just Ceilometer.

 
 Have people considered Ironic's hash ring solution? The project Akanda
 is now adopting it [1], and I think it might have potential. I'd
 appreciate it if interested parties could evaluate it before
 the Cinder midcycle sprint next week, to be ready for discussion.
 

I will have a look at the hash ring solution you mention and see if it
makes sense to use it.

And I would really love to see the HA A-A discussion enabled for remote
people, as some of us are interested in the discussion but won't be able
to attend. In my case, the problem is living in the Old World :-(

In a way I have to agree with you that sometimes we make Cinder look
more complex than it really is, and in my case the solution I proposed
in the post was way too complex, as has been pointed out. I just
tried to solve the A-A problem and fix some other issues, like
recovering lost jobs (those waiting for locks), at the same time.

There is an alternative solution I am considering that will be much
simpler and will align with Walter's efforts to remove locks from the
Volume Manager.  I just need to give it a hard think to make sure the
solution has all bases covered.

The main reason why I am suggesting using Tooz and a DLM is because I
think it will allow us to reach Active-Active faster and with less
effort, not because I 

Re: [openstack-dev] [nova] Thoughts on things that don't make freeze cutoffs

2015-07-31 Thread John Garbutt
On 29 July 2015 at 19:13, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 We talked a bit about this at the nova mid-cycle last week but I can't say I
 completely remember all of the points made, since I feel like we talked
 ourselves into a circle that got us back to more or less 'the current specs
 and freeze process is the least of all evils so let's not change it'.

We did agree some small changes:
* open nova-specs for mitaka now
* refine the definition of the spec backlog

Continuing the changes since the summit:
* continue trying to focus review effort and get subteams recommending
priorities: http://etherpad.openstack.org/p/liberty-nova-priorities-tracking
* continue to actively mentor people around reviews, in the hope they
can join nova-core

The possible future changes we discussed were:
* look into how we adopt phabricator
* look at options adopting runways/kanban using phabricator

(I plan to do a thread on things from the midcycle next week, after
the immediate deadlines are all sorted)

But yes, in general, the reasons we have most of the processes in
place still hold, and it still seems the fairest way forward for both
our users and developers.

It's hard to imagine the progress we have made around Upgrades and v2.1
without this approach. Cells v2 and the Scheduler enhancements should
provide the groundwork for a whole heap of crucial features our users
are requesting.

 Tomorrow is the feature freeze for nova but there is interest from a few
 people in getting rbd snapshot support into liberty.  The code is up for
 review [1] but the spec is not approved [2].

 With the process we have in place we can basically -2 this and say
 re-propose it for mitaka.

 One thing mentioned at the mid-cycle was what if people reviewed the spec
 and approved it, but marked the blueprint for 'next' so it's not actually
 tracked for liberty, but if the blueprint is approved and people are willing
 to review the code, it could land in liberty.

 The obvious downside here is the review burden, we have freeze deadlines in
 place to avoid a continual demand for feature review when we have bugs to
 fix before we release.  And it's not fair to the other people that already
 got their specs and blueprints approved with code up weeks or months ago but
 haven't had review attention and will just be deferred to another release.
 So we just end up with everyone being unhappy. :)

The idea of the freeze is to concentrate review effort on merging more
bug fixes and code related to priorities.

Honestly, delaying any of the blueprints really depresses me. People
have worked hard on following all our processes, and preparing the
code, almost always fixing real problems and enabling new use cases
for users that really need those features. But at some point, we have
to stop reviewing those, so we can work on our community agreed
priorities, and fix the issues in what we already have in tree.

 I'm trying to think of a way in which we could say, yeah, we've looked at
 the spec and this looks good, but don't expect it to land in the current
 release given other priorities and all of the other things we have actually
 agreed to review for this release (blueprint-wise). Basically your change
 goes on a backlog and come mitaka your blueprint could be moved from 'next'
 to the current release so it's part of the dashboard for tracking.

I was assuming we would do that by merging the nova-spec in Mitaka
directory, or for spec-less blueprints, we can just approve the
blueprint for Mitaka (via the nova-meeting in the usual way).

Or am I misreading your suggestion?

 I'm not sure how that would work with re-proposing the spec, I assume that
 would stay the same, but at least then you can just slap the
 'previously-approved' tag in the spec commit and it's a fast path to
 re-approval.

 This would also avoid the -2 on the code change which might prevent people
 from reviewing it thinking it's a waste of time.

 The goal is to ease the frustration of people trying to get specs/blueprints
 approved after freeze but also balance that with an understanding that there
 are priorities and limits to the amount of time reviewers can spend on these
 things - so you know, don't come nagging in IRC every 5 minutes to review
 your thing which isn't planned for the current release. Is there a middle
 ground?

I hope so.

For context, at one point this cycle we had around 100 pending spec
reviews and around 100 approved blueprints (I think those numbers are
about right). Hence the move to draw a line in the sand, and get folks
to shout up if they felt they were on the wrong side of it. We had a
week or so time window for that, which I totally acknowledge is hard
work and easy to miss, even more so for folks not working on Nova full
time. We did pre-announce the dates a month or so in advance to try
and make the process easier to deal with, and I am actively thinking
about better ways to do this.

 This is just 

Re: [openstack-dev] [Nova] Non-priority Feature Freeze is Tomorrow (July 30th)

2015-07-31 Thread John Garbutt
On 30 July 2015 at 09:56, John Garbutt j...@johngarbutt.com wrote:
 On 29 July 2015 at 19:20, John Garbutt j...@johngarbutt.com wrote:
 Hi,

 Tomorrow is: Non-priority Feature Freeze

 What does this mean? Well...

 * bug fixes: no impact, still free to merge
 * priority features: no impact, still free to merge
 * clean ups: sure we could merge those
 * non-priority features (i.e. blueprints with a low priority), you are
 no longer free to merge (we are free to re-approve previously approved
 things, due to gate issues / merge issues, if needed)

 Please note, the full Feature Freeze (and string freeze, etc), are
 when we tag liberty-3, which is expected to be on or just after
 September 1.

 This is all about focusing on merging more Bug Fixes and more Priority
 Features. For more details, please see:
 https://wiki.openstack.org/wiki/Nova/Liberty_Release_Schedule#Why_is_there_a_non-priority_Feature_Freeze_in_Nova.3F

 Its the same we did last release:
 http://lists.openstack.org/pipermail/openstack-dev/2015-February/056208.html

 Exceptions, I hear you cry? Let's follow a similar process to the one
 we used last time...

 If you want an exception:
 * please add request in here:
https://etherpad.openstack.org/p/liberty-nova-non-priority-feature-freeze
 * make sure you make your request before the end of Wednesday 6th August
 * nova-drivers will meet to decide what gets an exception (as before)
 * with the aim to merge the code for all exceptions by the end of
 Monday 10th August

 I have added this detail into the wiki, as we refine the details:
 https://wiki.openstack.org/wiki/Nova/Liberty_Release_Schedule#Non-priority_Feature_Freeze

 There are unlikely to be many exceptions given; it's really just for
 exceptional cases where something didn't get merged in time.

 The folks in the nova-meeting tomorrow may need to refine this
 process, but if anything changes, we can send out an update to the ML.

 Thanks,
 johnthetubaguy

 PS
 Due to time constraints, it's likely that it will be on Monday 3rd
 August that I will -2 all non-priority blueprint patches and
 un-approve all low priority blueprints, unless someone gets to that
 first.

 Actually, I should be able to do this on Friday morning, as normal.
 Bad timing, but I am mostly away from my computer over the next few
 days, but I am watching email a bit. (Something that was booked before
 the release dates were announced)

 Note, I don't plan on blocking things that are just pending a merge.
 I will only block things that don't have two +2 votes on them.
 This should help us keep productive through the gate congestion/issues.

 Thanks,
 John

OK, so the gate and check queue are really not helping us here.

Let's move the deadline to midnight (let's say PST) on Sunday 2nd August.

On Monday afternoon (in the UK sense), I will go through and defer low
priority blueprints that don't have a +2. We can wait a bit longer to
get things merged, if we have to, as it's the code review capacity we
are trying to optimise for here.

Hopefully that should help us get a few more things completed, in the
face of the gate issues.

Thanks,
John

PS
Please do mark your blueprints as complete when the code merges, if
possible. That saves me guessing whether they are complete or not.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: Return request-id to caller

2015-07-31 Thread Kekane, Abhishek
Hi Devs,

I have modified the cross-project spec for request-id based on this analysis
and submitted it for review.
Please refer to
https://review.openstack.org/#/c/156508/17/specs/return-request-id.rst and give
your valuable feedback on it.

Thank you in advance.

Abhishek Kekane

-Original Message-
From: Gorka Eguileor [mailto:gegui...@redhat.com] 
Sent: 29 July 2015 15:07
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] cross project communication: Return 
request-id to caller

On Tue, Jul 28, 2015 at 09:48:25AM -0400, Doug Hellmann wrote:
 Excerpts from Gorka Eguileor's message of 2015-07-28 10:37:42 +0200:
  On Fri, Jul 24, 2015 at 10:08:45AM -0400, Doug Hellmann wrote:
   Excerpts from Kekane, Abhishek's message of 2015-07-24 06:33:00 +:
Hi Devs,

X-Openstack-Request-Id. We have analysed python-cinderclient, 
python-glanceclient, python-novaclient, python-keystoneclient and 
python-neutronclient to check the return types.

There are 9 ways return values are returned from the python clients:
1. List
2. Dict
3. Resource class object
4. None
5. Tuple
6. Exception
7. Boolean (True/False, for keystoneclient)
8. Generator (for list APIs in glanceclient)
9. String (for novaclient)

Out of these 9, we have a solution for all return types except the
generator. In glanceclient, the list APIs return a generator, which is
immutable, so it is not possible to return the request-id in this case;
this is a blocker for adopting the solution.

I have added a detailed analysis of the above return types in the
etherpad [2] as solution #3.

If you have any suggestions for the generator type, please let
me know.
   
   It should be possible to create a new class to wrap the existing 
   generator and implement the iterator protocol [3].
   
   [3] 
   https://docs.python.org/2/reference/expressions.html#generator-ite
   rator-methods
   
   Doug
   
  
  Unless I'm missing something, I think we wouldn't even need to create
  a new class that implements the iterator protocol; we can just
  return a generator that generates from the other one.
  
  For example, for each of the requests, if we get the generator in 
  variable *result* that returns dictionaries and we want to add 
  *headers* to each dictionary:
  
  return (DictWithHeaders(resource, headers) for resource in result)
  
  Wouldn't that work?
 
 That would work, but it wouldn't be consistent with the way I read [2] 
 as describing how the other methods are being updated.
 
 For example, a method that now returns a list() will return a 
 ListWithHeaders(), and only that returned object will have the 
 headers, not its contents. A caller could do:

You are right, it wouldn't be consistent with the other methods, and that's
not good. So the new iterator class wrapper seems to be the way to go.
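
For the archives, the wrapper under discussion can be as small as this
(a sketch of the idea, not the final spec code):

class GeneratorWithHeaders(object):
    def __init__(self, generator, headers):
        # Iterate exactly like the wrapped generator, but expose the
        # response headers the same way ListWithHeaders does.
        self._generator = generator
        self.headers = headers

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._generator)

    next = __next__  # Python 2 compatibility

That way response.headers works for the list calls too, while
iteration itself is untouched.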


Gorka.

 
   response = client.some_method_returning_a_list()
   print reponse.headers
 
 but could not do
 
   print response[0].headers
 
 and get the same values.
 
 Creating a GeneratorWithHeaders class mirrors that behavior for 
 methods that return generators instead of lists.
 
 Doug
 
[2] https://etherpad.openstack.org/p/request-id
 
 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tempest]No way to skip S3 related tests

2015-07-31 Thread Jordan Pittier
Hi,
With the commit [1] 'minimize the default services' that happened in April,
nova-objectstore is not run by default. This means that by default,
Devstack doesn't provide any S3-compatible API (because swift3 is not
enabled by default, of course).

Now, I don't see any config flag or mechanism in Tempest to skip S3-related
tests. So, out of the box, we can't have a fully green Tempest run.

Note that there is a Tempest config flag compute_feature_enabled.ec2_api.
And there's also a mechanism implemented in 2012 by afazekas (see [2]) that
tried to skip S3 tests if an HTTP connection to 'boto.s3_url' failed with
a NetworkError, but that mechanism doesn't work anymore: the tests are not
properly skipped.

I'd like your opinion on the correct way to fix this:
1) Either introduce an object_storage_feature_enabled.s3_api flag in Tempest
and skip S3 tests if the value is False (see the sketch after this list).
This requires an additional patch to devstack to properly set the value of
the object_storage_feature_enabled.s3_api flag.

2) Or, try to fix the mechanism in tempest/thirdparty/boto/test.py that
auto-magically skips the S3 tests on NetworkError.
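
For option 1, the Tempest side would stay small. A sketch using the
usual skip pattern (the flag is the proposed one and the class name is
illustrative, not merged code):

from tempest import config
from tempest import test

CONF = config.CONF

class S3BucketsTest(test.BaseTestCase):
    @classmethod
    def skip_checks(cls):
        super(S3BucketsTest, cls).skip_checks()
        # Skip every S3 test unless the deployment enabled an
        # S3-compatible endpoint (swift3 or nova-objectstore).
        if not CONF.object_storage_feature_enabled.s3_api:
            raise cls.skipException('S3-compatible API is not enabled')

plus a matching devstack change to set the flag when swift3 or
nova-objectstore is enabled.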

What do you think?

Jordan


[1]
https://github.com/openstack-dev/devstack/commit/279cfe75198c723519f1fb361b2bff3c641c6cef
[2]
https://github.com/openstack/tempest/commit/a23f500725df8d5ae83f69eb4da5e47736fbb647#diff-ea760d854610bfed1ae3daa4ac242f74R133
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev