Re: [openstack-dev] [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'

2015-08-27 Thread Gilles Dubreuil


On 27/08/15 16:59, Gilles Dubreuil wrote:
 
 
 On 26/08/15 06:30, Rich Megginson wrote:
 This concerns the support of the names of domain scoped Keystone
 resources (users, projects, etc.) in puppet.

 At the puppet-openstack meeting today [1] we decided that
 puppet-openstack will support Keystone domain scoped resource names
 without a '::domain' in the name, only if the 'default_domain_id'
 parameter in Keystone has _not_ been set.  That is, if the default
 domain is 'Default'.  In addition:

 * In the OpenStack L release, if 'default_domain_id' is set, puppet will
 issue a warning if a name is used without '::domain'.

The default domain is always set to 'default' unless overridden to
something else.

 * In the OpenStack M release, puppet will issue a warning if a name is
 used without '::domain', even if 'default_domain_id' is not set.

Therefore the 'default_domain_id' is never 'not set'.

 * In N (or possibly, O), resource names will be required to have
 '::domain'.


I understand that, from the OpenStack N release onward, the domain would be
mandatory.

So I would like to revisit the list:

* In OpenStack L release:
  Puppet will issue a warning if a name is used without '::domain'.

* In OpenStack M release:
  Puppet will issue a warning if a name is used without '::domain'.

* From OpenStack N release:
  A name must be used with '::domain'.
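For illustration, this is roughly the shape the puppet resources take under that plan (a hedged sketch: 'bob', 'alice' and 'services' are invented values; keystone_user is the puppet-keystone type):

```puppet
# Sketch only: resource titles with and without an explicit domain.
# From the N release onward, only the first form would be accepted.
keystone_user { 'bob::services':   # explicit '::domain' - always safe
  ensure => present,
}
keystone_user { 'alice':           # no '::domain' - warns in L/M, error in N
  ensure => present,
}
```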


 
 +1
 
 The current spec [2] and current code [3] try to support names without a
 '::domain' in the name, in non-default domains, provided the name is
 unique across _all_ domains.  This will have to be changed in the
 current code and spec.

 
 Ack
 

 [1]
 http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-08-25-15.01.html

 [2]
 http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html

 [3]
 https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L217



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


Re: [openstack-dev] [Openstack-operators] PLEASE READ: VPNaaS API Change - not backward compatible

2015-08-27 Thread Paul Michali
Thanks for the links and thoughtful comments James!

In the Heat documentation, is the subnet ID being treated as optional (or
is the documentation not correct)? I think it is a required argument in the
REST API. Ref:
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::VPNService

It really sounds like we need to support both versions simultaneously, to
avoid some of the complexity that you are mentioning (I didn't know about
the Heat aspects).

We (development) should focus on formulating a plan for supporting both
versions. Hopefully my proposal in https://review.openstack.org/#/c/191944/ is
a good starting point.  I'll respond to comments on that today.

Regards,

Paul Michali (pc_m)

P.S. BTW: No offense taken in comments - email is such an imprecise
communication medium.



On Wed, Aug 26, 2015 at 7:11 PM James Dempsey jam...@catalyst.net.nz
wrote:

 On 26/08/15 23:43, Paul Michali wrote:
  James,
 
  Great stuff! Please see @PCM in-line...
 
  On Tue, Aug 25, 2015 at 6:26 PM James Dempsey jam...@catalyst.net.nz

 SNIP

  1) Horizon compatibility
 
  We run a newer version of horizon than we do neutron.  If Horizon
  version X doesn't work with Neutron version X-1, this is a very big
  problem for us.
 
 
  @PCM Interesting. I always thought that Horizon updates lagged Neutron
  changes, and this wouldn't be a concern.
 

 @JPD
 Our installed Neutron typically lags Horizon by zero or one release.  My
 concern is how will Horizon version X cope with a point-in-time API
 change?  Worded slightly differently: We rarely update Horizon and
 Neutron at the same time so there would need to be a version(or
 versions) of Horizon that could detect a Neutron upgrade and start using
 the new API.  (I'm fine if there is a Horizon config option to select
 old/new VPN API usage.)
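To make that concrete, here is a hedged sketch of such config-driven selection on the client side; `use_endpoint_groups` and `build_vpnservice_body` are invented names, not real Horizon or Neutron code:

```python
def build_vpnservice_body(router_id, subnet_id, use_endpoint_groups=False):
    """Build a VPN service request body for the old or new VPNaaS API.

    Illustrative only: under the proposed API change, the subnet moves
    out of the VPN service into a separate endpoint-group resource, so
    the new-style body omits 'subnet_id'.
    """
    body = {'vpnservice': {'router_id': router_id}}
    if not use_endpoint_groups:
        # old API: the subnet is an attribute of the service itself
        body['vpnservice']['subnet_id'] = subnet_id
    return body
```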

 
 
 
  2) Service interruption
 
  How much of a service interruption would the 'migration path' cause?
 
 
  @PCM The expectation of the proposal is that the migration would occur as
  part of the normal OpenStack upgrade process (new services installed,
  current services stopped, database migration occurs, new services are
  started).
 
  It would have the same impact as what would happen today, if you update
  from one release to another. I'm sure you folks have a much better handle
  on that impact and how to handle it (maintenance windows, scheduled
  updates, etc).
 

 @JPD This seems fine.

 
  We
  all know that IPsec VPNs can be fragile...  How much of a guarantee will
  we have that migration doesn't break a bunch of VPNs all at the same
  time because of some slight difference in the way configurations are
  generated?
 
 
  @PCM I see the risk as extremely low. With the migration, the end result
 is
  really just moving/copying fields from one table to another. The
 underlying
  configuration done to *Swan would be the same.
 
  For example, the subnet ID, which is specified in the VPN service API and
  stored in the vpnservices table, would be stored in a new vpn_endpoints
  table, and the ipsec_site_connections table would reference that entry
  (rather than looking up the subnet in the vpnservices table).
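A toy version of that table move, with schemas invented purely for illustration (the real Neutron models differ):

```python
import sqlite3

# Sketch of the migration shape: copy the subnet reference out of
# vpnservices into a new vpn_endpoints table. Column names and schema
# are invented for this example.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE vpnservices (id TEXT, subnet_id TEXT)")
conn.execute("INSERT INTO vpnservices VALUES ('svc1', 'subnet-a')")
conn.execute("CREATE TABLE vpn_endpoints (id TEXT, endpoint TEXT)")

# One endpoint row per legacy subnet reference; connections would then
# point at vpn_endpoints instead of reading vpnservices.subnet_id.
conn.execute("INSERT INTO vpn_endpoints "
             "SELECT id || '-ep', subnet_id FROM vpnservices")
endpoints = conn.execute("SELECT endpoint FROM vpn_endpoints").fetchall()
```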
 

 @JPD This makes me feel more comfortable; thanks for explaining.

 
 
  3) Heat compatibility
 
  We don't always run the same version of Heat and Neutron.
 
 
  @PCM I must admit, I've never used Heat, and am woefully ignorant about
 it.
  Can you elaborate on Heat concerns as may be related to VPN API
 differences?
 
  Is Heat being used to setup VPN connections, as part of orchestration?
 

 @JPD
 My concerns are two-fold:

 1) Because Heat makes use of the VPNaaS API, it seems like the same
 situation exists as with Horizon.  Some version or versions of Heat will
 need to be able to make use of both old and new VPNaaS APIs in order to
 cope with a Neutron upgrade.

 2) Because we use Heat resource types like
 OS::Neutron::IPsecSiteConnection [1], we may lose the ability to
 orchestrate VPNs if endpoint groups are not added to Heat at the same time.


 Number 1 seems like a real problem that needs a fix.  Number 2 is a fact
 of life that I am not excited about, but am prepared to deal with.

 Yes, Heat is being used to build VPNs, but I am prepared to make the
 decision on behalf of my users... VPN creation via Heat is probably less
 important than the new VPNaaS features, but it would be really great if
 we could work on the updated heat resource types in parallel.

 [1]

 http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::IPsecSiteConnection

 
 
 
 
  Is there pain for the customers beyond learning about the new API
  changes
  and capabilities (something that would apply whether there is backward
  compatibility or not)?
 
 
  See points 1,2, and 3 above.
 
 
 
  Another implication of not having backwards compatibility would be that
  end-users would need to immediately switch to using the new API, once
 the
  migration occurs, versus doing so 

Re: [openstack-dev] [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'

2015-08-27 Thread Gilles Dubreuil


On 27/08/15 22:40, Gilles Dubreuil wrote:
 
 
 On 27/08/15 16:59, Gilles Dubreuil wrote:


 On 26/08/15 06:30, Rich Megginson wrote:
 This concerns the support of the names of domain scoped Keystone
 resources (users, projects, etc.) in puppet.

 At the puppet-openstack meeting today [1] we decided that
 puppet-openstack will support Keystone domain scoped resource names
 without a '::domain' in the name, only if the 'default_domain_id'
 parameter in Keystone has _not_ been set.  That is, if the default
 domain is 'Default'.  In addition:

 * In the OpenStack L release, if 'default_domain_id' is set, puppet will
 issue a warning if a name is used without '::domain'.
 
 The default domain is always set to 'default' unless overridden to
 something else.

Just to clarify, I don't see any logical difference between the
default_domain_id being 'default' or something else.

Per the keystone.conf comment (seen below), the default_domain_id,
whatever its value, is created as a valid domain.

# This references the domain to use for all Identity API v2 requests
(which are not aware of domains). A domain with this ID will be created
for you by keystone-manage db_sync in migration 008. The domain
referenced by this ID cannot be deleted on the v3 API, to prevent
accidentally breaking the v2 API. There is nothing special about this
domain, other than the fact that it must exist in order to maintain
support for your v2 clients. (string value)
#default_domain_id = default

Testing whether a 'default_domain_id' is set or not actually
translates to checking whether the id is 'default' or something else.
But I don't see the point here. If a user decides to change 'default' to
'This_is_the_domain_id_for_legacy_v2', how does this help?

If that makes sense then I would actually avoid the intermediate stage:

* In OpenStack L release:
Puppet will issue a warning if a name is used without '::domain'.

* From OpenStack M release:
A name must be used with '::domain'.
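As a rough sketch (not the actual provider code) of what the title parsing and deprecation warning amount to:

```python
import warnings

def split_title(title):
    """Split a Keystone resource title into (name, domain).

    Illustrative only: titles of the form 'name::domain' carry an
    explicit domain; bare titles fall back to the default domain and
    trigger a deprecation warning, matching the L/M plan above.
    """
    if '::' in title:
        name, domain = title.rsplit('::', 1)
        return name, domain
    warnings.warn("resource title %r has no '::domain'; "
                  "assuming the default domain" % title,
                  DeprecationWarning)
    return title, 'Default'
```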

 
 * In the OpenStack M release, puppet will issue a warning if a name is
 used without '::domain', even if 'default_domain_id' is not set.
 
 Therefore the 'default_domain_id' is never 'not set'.
 
 * In N (or possibly, O), resource names will be required to have
 '::domain'.

 
 I understand that, from the OpenStack N release onward, the domain would be
 mandatory.
 
 So I would like to revisit the list:
 
 * In OpenStack L release:
   Puppet will issue a warning if a name is used without '::domain'.
 
 * In OpenStack M release:
   Puppet will issue a warning if a name is used without '::domain'.
 
 * From OpenStack N release:
   A name must be used with '::domain'.
 
 

 +1

 The current spec [2] and current code [3] try to support names without a
 '::domain' in the name, in non-default domains, provided the name is
 unique across _all_ domains.  This will have to be changed in the
 current code and spec.


 Ack


 [1]
 http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-08-25-15.01.html

 [2]
 http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html

 [3]
 https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L217





Re: [openstack-dev] [Fuel] Code review process in Fuel and related issues

2015-08-27 Thread Aleksandr Didenko
Hi,

I'm all for any formalization and automation of the review process. The only
concern I see here is about core reviewers' involvement metrics. If we
succeed in reducing the load on core reviewers, it will mean that core
reviewers do fewer code reviews. This could lead to core reviewer
demotion.

 - Contributor finds SME to review the code. Ideally, contributor can have
his/her peers to help with code review first. Contributor doesn’t bother
SME, if CI has -1 on a patch proposed

I like the idea of adding reviewers automatically based on the MAINTAINERS
file. In that case we can drop this ^^ part of the instruction. It would be
nice if Jenkins could add reviewers after CI +1, or we could use a gerrit
dashboard so SMEs don't waste their time on reviews that have not yet
passed CI and do not have +1 from other reviewers.
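For instance, a custom Gerrit dashboard section could show SMEs only changes that already pass CI and carry no -1 (the section title is invented; the query operators are standard Gerrit search syntax):

```
[section "Passed CI, ready for SME review"]
query = status:open label:Verified+1 NOT label:Code-Review-1
```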

Regards,
Alex


On Wed, Aug 19, 2015 at 11:31 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Hi all,
 let's discuss code review process in Fuel and what we can improve. For
 those who want to just have a quick context of this email, please check out
 presentation slides [5].

 ** Issues **
 Depending on a Fuel subproject, I'm aware of two buckets of issues with
 code review in Fuel:
 a) It is hard to get code reviewed and merged
 b) Quality of code review itself could be better

 First bucket:
 1) It is hard to find subject matter experts who can help and core
 reviewers for the area of code, especially if you are new to the project
 2) Contributor sometimes receives contradicting opinions from other
 reviewers, including cores
 3) Assigned / responsible core reviewer is needed for a feature in order
 to help in architectural negotiations, guiding through, landing the code
 into master
 4) Long wait time for getting code reviewed

 Quality-related items:
 5) Reviews are not thorough enough - not a full review in one shot. For
 example, a reviewer can put -1 due to a missed comma but not notice a major
 gap in the code. This leads to many patch sets and demotivates contributors
 6) Some of the core reviewers decreased their involvement, and so number
 of reviews has dropped dramatically. However, they still occasionally merge
 code. I propose to remove these cores, and get them back if their
 involvement is increased back again (I very rarely merge code, but I'm one
 of those to be removed from cores). This is standard practice in OpenStack
 community as well, see Neutron as example [4, line 270].
 7) As a legacy of the past, we still have old core reviewers being able to
 merge code in all Fuel repos. All new cores have core rights only for
 single repo, which is their primary area of expertise. For example, core
 team size for fuel-library is adidenko + whole fuel-core group [7]. In
 fact, there are just 4 "trusted" or "real" core reviewers in fuel-library,
 not the whole fuel-core group.

 These problems are not new to OpenStack and open source in general. You
 can find discussions about same and similar issues in [1], [2], [3].


 ** Analysis of data **
 In order to understand what can be improved, I mined the data at first.
 Main source of information was stackalytics.com. Please take a look at
 few graphs on slides 4-7 [5], built based on data from stackalytics. Major
 conclusions from these graphs:
 1) Rather small number of core reviewers (in comparison with overall
 number of contributors) reviewing 40-60% of patch sets, depending on repo
 (40% fuel-library, 60% fuel-web). See slide #4.
 2) The load on core reviewers in the Fuel team is higher on average than in
 some other OpenStack projects. The average load on a core
 reviewer across Nova, Keystone, Neutron and Cinder is 2.5 reviews a day. In
 Fuel though it is 3.6 for fuel-web and 4.6 for fuel-library. See slide #6.
 3) Statistics on how fast feedback on code proposed is provided:
 - fuel-library: 2095 total reviews in 30 days [13], 80 open reviews,
 average wait time for reviewer - 1d 1h [12]
 - fuel-web: 1789 total reviews in 30 days [14], 52 open reviews, average
 wait time for reviewer - 1d 17h [15]

 There is no need to have deep analysis on whether we have well defined
 areas of ownership in Fuel components or not: we don’t have it formally
 defined, and it’s not documented anywhere. So finding the right core
 reviewer can be a challenging task for a new contributor to Fuel, and this
 issue has to be addressed.


 ** Proposed solution **
 According to stackalytics, for the whole fuel-group we had 262 reviewers
 with 24 core reviewers for the past 180 days [19]. I think these numbers
 are high enough to warrant a structure in which the code review process
 would be transparent, understandable and scalable.

 Let’s first agree on the terminology which I’d like to use. It can take
 pages of precise definitions, however in this email thread I’d like to
 focus on code review process more, and hopefully high level description of
 roles would be enough for now.
 - Contributor: new contributor, who doesn’t work on 

Re: [openstack-dev] [Fuel] Code review process in Fuel and related issues

2015-08-27 Thread Davanum Srinivas
Mike,

This is a great start.

1) I'd advise codifying the proposal in fuel-specs under a 'policy' directory
(obviously as a review in the fuel-specs repo) so everyone agrees to the
structure of the teams, terminology, etc. For example, oslo uses a directory
to write down some of its decisions:

2) We don't have SME terminology, but we do have Maintainers both in
oslo-incubator (
http://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS)
and in Rally (https://rally.readthedocs.org/en/latest/project_info.html), so
let's use that.

3) Is there a plan to split existing repos to more repos? Then each repo
can have a core team (one core team for one repo), PTL takes care of all
repos and MAINTAINERS take care of directories within a repo. That will
line up well with what we are doing elsewhere in the community (essentially
Component Lead is a core team which may not be a single person).

We do not have a concept of an SLA anywhere that I know of, so it will have to
be some kind of social consensus and not a real carrot/stick. One way is to
publish/use data about reviews, like stackalytics or russell's site (
http://russellbryant.net/openstack-stats/fuel-openreviews.html), and police
the reviews that drop off the radar during the weekly meetings or something
like that.

Thanks,
Dims

On Wed, Aug 19, 2015 at 4:31 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Hi all,
 let's discuss code review process in Fuel and what we can improve. For
 those who want to just have a quick context of this email, please check out
 presentation slides [5].

 ** Issues **
 Depending on a Fuel subproject, I'm aware of two buckets of issues with
 code review in Fuel:
 a) It is hard to get code reviewed and merged
 b) Quality of code review itself could be better

 First bucket:
 1) It is hard to find subject matter experts who can help and core
 reviewers for the area of code, especially if you are new to the project
 2) Contributor sometimes receives contradicting opinions from other
 reviewers, including cores
 3) Assigned / responsible core reviewer is needed for a feature in order
 to help in architectural negotiations, guiding through, landing the code
 into master
 4) Long wait time for getting code reviewed

 Quality-related items:
  5) Reviews are not thorough enough - not a full review in one shot. For
  example, a reviewer can put -1 due to a missed comma but not notice a major
  gap in the code. This leads to many patch sets and demotivates contributors
 6) Some of the core reviewers decreased their involvement, and so number
 of reviews has dropped dramatically. However, they still occasionally merge
 code. I propose to remove these cores, and get them back if their
 involvement is increased back again (I very rarely merge code, but I'm one
 of those to be removed from cores). This is standard practice in OpenStack
 community as well, see Neutron as example [4, line 270].
 7) As a legacy of the past, we still have old core reviewers being able to
 merge code in all Fuel repos. All new cores have core rights only for
 single repo, which is their primary area of expertise. For example, core
 team size for fuel-library is adidenko + whole fuel-core group [7]. In
  fact, there are just 4 "trusted" or "real" core reviewers in fuel-library,
 not the whole fuel-core group.

 These problems are not new to OpenStack and open source in general. You
 can find discussions about same and similar issues in [1], [2], [3].


 ** Analysis of data **
 In order to understand what can be improved, I mined the data at first.
 Main source of information was stackalytics.com. Please take a look at
 few graphs on slides 4-7 [5], built based on data from stackalytics. Major
 conclusions from these graphs:
 1) Rather small number of core reviewers (in comparison with overall
 number of contributors) reviewing 40-60% of patch sets, depending on repo
 (40% fuel-library, 60% fuel-web). See slide #4.
  2) The load on core reviewers in the Fuel team is higher on average than in
  some other OpenStack projects. The average load on a core
 reviewer across Nova, Keystone, Neutron and Cinder is 2.5 reviews a day. In
 Fuel though it is 3.6 for fuel-web and 4.6 for fuel-library. See slide #6.
 3) Statistics on how fast feedback on code proposed is provided:
 - fuel-library: 2095 total reviews in 30 days [13], 80 open reviews,
 average wait time for reviewer - 1d 1h [12]
 - fuel-web: 1789 total reviews in 30 days [14], 52 open reviews, average
 wait time for reviewer - 1d 17h [15]

 There is no need to have deep analysis on whether we have well defined
 areas of ownership in Fuel components or not: we don’t have it formally
  defined, and it’s not documented anywhere. So finding the right core
  reviewer can be a challenging task for a new contributor to Fuel, and this
 issue has to be addressed.


 ** Proposed solution **
 According to stackalytics, for the whole fuel-group we had 262 reviewers
 with 24 core 

Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-27 Thread Ivan Kolodyazhny
Hi,

Looks like we need to be able to set the AZ per backend. What do you think
about such an option?
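If Cinder grew such an option, a cinder.conf sketch might look like this; note that `backend_availability_zone` is hypothetical here — at the time only the per-c-vol `storage_availability_zone` option existed:

```ini
[DEFAULT]
enabled_backends = lvm-az1, ceph-az2

[lvm-az1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
backend_availability_zone = az1    # hypothetical per-backend option

[ceph-az2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
backend_availability_zone = az2    # hypothetical per-backend option
```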

Regards,
Ivan Kolodyazhny

On Mon, Aug 10, 2015 at 7:07 PM, John Griffith john.griffi...@gmail.com
wrote:



 On Mon, Aug 10, 2015 at 9:24 AM, Dulko, Michal michal.du...@intel.com
 wrote:

 Hi,

  In the Kilo cycle, [1] was merged. It started passing the AZ of a booted VM to
  Cinder to make volumes appear in the same AZ as the VM. This is certainly a
  good approach, but I wonder how to deal with a use case where the administrator
  cares about the AZ of the VM's compute node but wants to ignore the AZ of the
  volume. Such a case would be when fault tolerance of storage is maintained at
  another level - for example using Ceph replication and failure domains.

 Normally I would simply disable AvailabilityZoneFilter in cinder.conf,
 but it turns out cinder-api validates if availability zone is correct [2].
 This means that if Cinder has no AZs configured all requests from Nova will
 fail on an API level.

  Configuring fake AZs in Cinder is also problematic, because the AZ cannot be
  configured in a per-backend manner. I can only configure it per c-vol node,
  so I would need N extra nodes running c-vol, where N is the number of AZs, to
  achieve that.

 Is there any solution to satisfy such use case?

 [1] https://review.openstack.org/#/c/157041
 [2]
 https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L279-L282



 Seems like we could introduce the capability in cinder to ignore that if
 it's desired?  It would probably be worth looking on the Cinder side at
 being able to configure multiple AZs for a volume (perhaps even an
 aggregate zone just for Cinder).  That way we still honor the setting but
 provide a way to get around it for those who know what they're doing.

 John




Re: [openstack-dev] [OpenStack-Infra] [openstack-infra][third-party][CI][nodepool] Uploading images to nodepool.

2015-08-27 Thread Asselin, Ramy
[2] is the best option to upload the image.

Yes, the install_master.sh script manually runs puppet to apply your dev 
changes to nodepool.yaml and vars.sh to your production configuration (e.g. 
/etc/nodepool/nodepool.yaml).

Ramy

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Wednesday, August 26, 2015 10:48 PM
To: OpenStack Development Mailing List (not for usage questions); 
openstack-in...@lists.openstack.org
Subject: [OpenStack-Infra] [openstack-infra][third-party][CI][nodepool] 
Uploading images to nodepool.

Hi Folks,

I am following Ramy's new guide for setting up the CI. So far I have 
installed the master and created the slave node image using [1]. Now I want 
to upload the image to nodepool; can I use [2] to do so, or is there some 
other way?

* Are there any other changes that need to be made in vars.sh [3] and 
nodepool.yaml [4]?
* Do I need to reinstall the master after making the changes in 
nodepool.yaml and vars.sh?

Please enlighten me with your ideas folks.

[1] 
https://github.com/openstack-infra/project-config/tree/master/nodepool/elements#using-diskimage-builder-to-build-devstack-gate-nodes
[2] nodepool image-upload all image-name
[3] 
https://github.com/rasselin/os-ext-testing-data/blob/master/vars.sh.sample#L17
[4] 
https://github.com/rasselin/os-ext-testing-data/blob/master/etc/nodepool/nodepool.yaml.erb.sample

--
Thanks & Regards,
Abhishek
Cloudbyte Inc. | http://www.cloudbyte.com


Re: [openstack-dev] [hacking] [style] multi-line imports PEP 0328

2015-08-27 Thread Ian Cordasco


On 8/25/15, 10:58, Clay Gerrard clay.gerr...@gmail.com wrote:



On Tue, Aug 25, 2015 at 8:45 AM, Kevin L. Mitchell
kevin.mitch...@rackspace.com wrote:

On Mon, 2015-08-24 at 22:53 -0700, Clay Gerrard wrote:
 So, I know that hacking has H301 (one import per line) - but say maybe
 you wanted to import *more* than one thing on a line (there are some
 exceptions, right?  sqlalchemy migrations or something?)

There's never a need to import more than one thing per line given the
rule to only import modules, not objects.  While that is not currently
enforced by hacking, it is a strong style guideline.  (Exceptions for
things like sqlalchemy do exist, of course.)




Thank you for echoing my premise - H301 exists, but there are exceptions,
so...


On Mon, 2015-08-24 at 22:53 -0700, Clay Gerrard wrote:

Anyway - I'm sure there could be a pep8 plugin rule that enforces the use of
parentheses for multi-line imports instead of backslash line breaks [1] -
but would that be something that hacking would want to carry (since
*most* of the time H301 would kick in first?) - or if not, is there a
way to plug it into pep8 outside of hacking without having to install
some random one-off extension for this one rule separately?



-Clay

So, I'm fairly certain that pep8 has a check (though it may not be on by
default) for lines that end in \. It will apply to import statements. That
said, by turning off the hacking checks around imports you lose some of
the consistency. So if you do that, consider flake8-import-order as a
plugin. It allows multiple (non-module) imports on a line but insists
they be ordered appropriately and such.
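For reference, the two multi-line forms in question - PEP 328 parentheses versus backslash continuation (stdlib names used just for illustration):

```python
# Backslash continuation - the style such a check would flag:
# from os.path import join, split, \
#     basename, dirname

# PEP 328 parenthesized form - the preferred multi-line import:
from os.path import (join, split,
                     basename, dirname)
```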

Cheers,
Ian
Flake8 core developer, maintainer
Hacking core reviewer
pep8, pyflakes, mccabe, etc. maintainer/core developer/whatever



Re: [openstack-dev] [api][keystone][openstackclient] Standards for object name attributes and filtering

2015-08-27 Thread David Chadwick
Hi Henry

in principle I think it is a good idea to have a user friendly name
attribute for every entity. The name should be unique amongst the same
set of entities (though not between entities since context should imply
what entity you are referring to), otherwise the name would have to be
combined with the ID to identify it, and then you would lose the
usability advantage of having a name.

Eg. when setting up a mapping rule in Horizon it is more user friendly
to refer to domains and groups etc. using their user friendly names
rather than their IDs, even if the mapping rule requires the ID in JSON.

regards

David

On 26/08/2015 10:45, Henry Nash wrote:
 Hi
 
 With keystone, we recently came across an issue in terms of the assumptions 
 that the openstack client is making about the entities it can show - namely 
 that is assumes all entries have a ‘name’ attribute (which is how the 
 openstack show command works). Turns out, that not all keystone entities 
 have such an attribute (e.g. IDPs for federation) - often the ID is really 
 the name. Is there already agreement across our APIs that all first class 
 entities should have a ‘name’ attribute?  If we do, then we need to change 
 keystone, if not, then we need to change openstack client to not make this 
 assumption (and perhaps allow some kind of per-entity definition of which 
 attribute should be used for ‘show’).
 
 A follow on (and somewhat related) question to this, is whether we have 
 agreed standards for what should happen if someone provides an unrecognized 
 filter to a list entities API request at the http level (this is related 
 since this is also the hole osc fell into with keystone since, again, ‘name’ 
 is not a recognized filter attribute). Currently keystone ignores filters it 
 doesn’t understand (so if that was your only filter, you would get back all 
 the entities). The alternative approach would of course be to return no 
 entities if the filter is on an attribute we don’t recognize (or even issue a 
 validation or bad request exception).  Again, the question is whether we have 
 agreement across the projects for how such unrecognized filtering should be 
 handled?
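The two behaviours under discussion can be sketched like this (an illustrative helper, not Keystone code):

```python
def list_entities(entities, filters, known=('id',), strict=False):
    """Filter entity dicts by attribute.

    Lenient mode mirrors Keystone's current behaviour: unknown filter
    keys are silently dropped, so a lone unknown filter returns all
    entities. Strict mode rejects unknown keys, as a 400 Bad Request
    would at the HTTP layer.
    """
    unknown = set(filters) - set(known)
    if unknown:
        if strict:
            raise ValueError('unrecognized filters: %s' % sorted(unknown))
        filters = {k: v for k, v in filters.items() if k in known}
    return [e for e in entities
            if all(e.get(k) == v for k, v in filters.items())]
```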
 
 Thanks
 
 Henry
 Keystone Core
 


Re: [openstack-dev] Removing unused dependencies like:'discover' module from all projects

2015-08-27 Thread Robert Collins
It's not needed for 2.6 either - unittest2 includes a more up-to-date
discover implementation.
On 28 Aug 2015 5:21 am, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:



 On 8/27/2015 6:26 AM, Chandan kumar wrote:

 Hello,

 I am packaging 'discover' module 
 https://bugzilla.redhat.com/show_bug.cgi?id=1251951  for RDO.
  Since this module is no longer maintained, as per
  http://code.google.com/p/unittest-ext/
  and this module is used as a test dependency in all the projects, as
  per the 'openstack-requirements' module 

  https://github.com/openstack/requirements/blob/master/global-requirements.txt#L246
  and I had a discussion with lifeless regarding that
 https://github.com/testing-cabal/unittest-ext/issues/96 

 and he has proposed a fix on that:
 https://review.openstack.org/#/c/217046/

  Can someone confirm whether it is obsolete or not?
  If it is obsolete, can we remove it if it does not break any project,
  so that I can create a bug to track it?

 Needs input on that.

 Thanks,

 Chandan Kumar




 The server projects dropped python 2.6 support in kilo, but the libraries
 still run py26 jobs for compat.  So while it's probably OK to remove
 discover as a test dependency for the server projects, I'd only remove it
 for libraries if they are no longer supporting python < 2.7.

 --

 Thanks,

 Matt Riedemann





[openstack-dev] [Ironic] Functional testing for Ironic. Interested parties wanted

2015-08-27 Thread Villalovos, John L
Hello,

I'm John Villalovos aka jlvillal on IRC. I am working primarily on the Ironic 
project and have been asked to work on functional testing for Ironic.

My main starting focus will be the openstack/python-ironicclient and 
openstack/ironic projects.

I am trying to find out who else would be interested in working on this - 
hopefully people who know more about functional testing than I do, though 
that's not a requirement :)

If you are interested, please email me or ping me on IRC.  I am usually on 
#openstack-ironic as jlvillal.

Hopefully we get enough responses that we could hold an ad-hoc discussion about 
this on IRC with the interested parties and come up with a plan.

Thanks,
John



Re: [openstack-dev] [stable] [infra] How to auto-generate stable release notes

2015-08-27 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2015-08-27 11:20:05 +0200:
 Doug Hellmann wrote:
  Excerpts from Robert Collins's message of 2015-08-19 11:04:37 +1200:
  Proposed data structure:
  - create a top level directory in each repo called release-notes
  - within that create a subdirectory called changes.
  - within the release-notes dir we place yaml files containing the
  release note inputs.
  - within the 'changes' subdirectory, the name of the yaml file will be
  the gerrit change id in a canonical form.
 E.g. I1234abcd.yaml
 This serves two purposes: it guarantees file name uniqueness (no
  merge conflicts) and lets us
 determine which release to group it in (the most recent one, in
  case of merge+revert+merge patterns).
  
  We changed this to using a long enough random number as a prefix, with
  a slug value provided by the release note author to help identify what
  is in the file.
  
  I think maybe 8 hex digits for the prefix. Or should we go full UUID?
  
  The slug might be something like bug  or removing option foo
  which would be converted to a canonical form, removing whitespace in the
  filename. The slug can be changed, to allow for fixing typos, but the
  prefix needs to remain the same in order for the note to be recognized
  as the same item.
 
 Random hex digit prefix feels overkill and not really more user-friendly
 than ChangeIDs. Robert's initial proposal used ChangeIDs as a way to map
 snippets to commits, and then to a given release. How would that work
 with UUID+slugs ? Wondering if a one-directory-per-release structure
 couldn't work instead.

reno [1] creates the prefix for you automatically, and uses the git logs
to figure out which files belong in which release. So the notes author does
not need to organize the files at all.
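The naming scheme described above (a random hex prefix plus a canonicalized, editable slug) can be sketched as follows; this assumes nothing about reno's actual implementation details:

```python
import random
import re

def note_filename(slug):
    """Sketch of the scheme: an 8-hex-digit random prefix that stays
    stable, plus a slug canonicalized by replacing whitespace and
    punctuation with dashes.  The real reno code may differ."""
    prefix = "%08x" % random.getrandbits(32)
    canonical = re.sub(r"[^a-z0-9]+", "-", slug.lower()).strip("-")
    return "%s-%s.yaml" % (prefix, canonical)

name = note_filename("removing option foo")
print(name)  # e.g. '3fa4b21c-removing-option-foo.yaml' (prefix is random)
```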

 
  2) For each version: scan all the commits to determine gerrit change-id's.
   i) read in all those change ids .yaml files and pull out any notes within 
  them.
   ii) read in any full version yaml file (and merge in its contained notes)
   iii) Construct a markdown document as follows:
a) Sort any preludes (there should be only one at most, but lets not
  error if there are multiple)
  
  Rather than sorting, which would change the order of notes as new items
  are added, what about listing them in an order based on when they were
  added in the history? We can track the first time a note file appears,
  so we can maintain the order even if a note is modified.
 
 +1 That would match the way release notes are produced now (snippets
 sorted in order of addition).
 

Very good, that's what it does now. I still need to work on the
sphinx integration, and python 3 support, but I think version 0.1.1
is starting to be usable for this release.

Doug

[1] https://github.com/dhellmann/reno (I'll be moving this into gerrit
next week)



Re: [openstack-dev] [Fuel] Code review process in Fuel and related issues

2015-08-27 Thread Evgeniy L
 - SME reviews the code within SLA, which should be defined per component

Also, I would like to add that I'm not against metrics. We can collect
metrics in order to figure out whether some improvement in the process
helped to speed up reviews, but asking Cores/SMEs to do the job faster
will definitely affect quality.

So the suggestion is to collect metrics *only* to prove that some
improvement in the process helped or didn't help.

On Thu, Aug 27, 2015 at 5:58 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Mike,

 I have several comments.

  SLA should be the driver of doing timely reviews, however we can’t
 allow to fast-track code into master suffering quality of review ...

 As for me, the idea of an SLA contradicts qualitative reviews.

 Another thing is I got a bit confused by the difference between Core
 Reviewer and Component Lead,
 aren't those the same persons? Shouldn't every Core Reviewer know the
 architecture, best practises
 and participate in design architecture sessions?

  - If core reviewer has not landed the code yet, Component Lead merges
 patch within SLA defined (or declines to merge and provides explanation as
 part of review).

 For example here as far as I'm concerned Component Lead is Core reviewer,
 since
 he has permissions to merge.

 Thanks,


 On Wed, Aug 19, 2015 at 11:31 AM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Hi all,
 let's discuss code review process in Fuel and what we can improve. For
 those who want to just have a quick context of this email, please check out
 presentation slides [5].

 ** Issues **
 Depending on a Fuel subproject, I'm aware of two buckets of issues with
 code review in Fuel:
 a) It is hard to get code reviewed and merged
 b) Quality of code review itself could be better

 First bucket:
 1) It is hard to find subject matter experts who can help and core
 reviewers for the area of code, especially if you are new to the project
 2) Contributor sometimes receives contradicting opinions from other
 reviewers, including cores
 3) Assigned / responsible core reviewer is needed for a feature in order
 to help in architectural negotiations, guiding through, landing the code
 into master
 4) Long wait time for getting code reviewed

 Quality-related items:
 5) Not thorough enough, full review in one shot. For example, reviewer
 can put -1 due to missed comma, but do not notice major gap in the code.
 It leads to many patch sets, and demotivation of contributors
 6) Some of the core reviewers decreased their involvement, and so number
 of reviews has dropped dramatically. However, they still occasionally merge
 code. I propose to remove these cores, and get them back if their
 involvement is increased back again (I very rarely merge code, but I'm one
 of those to be removed from cores). This is standard practice in OpenStack
 community as well, see Neutron as example [4, line 270].
 7) As a legacy of the past, we still have old core reviewers being able
 to merge code in all Fuel repos. All new cores have core rights only for
 single repo, which is their primary area of expertise. For example, core
 team size for fuel-library is adidenko + whole fuel-core group [7]. In
 fact, there are just 4 trusted or real core reviewers in fuel-library,
 not the whole fuel-core group.

 These problems are not new to OpenStack and open source in general. You
 can find discussions about same and similar issues in [1], [2], [3].


 ** Analysis of data **
 In order to understand what can be improved, I mined the data at first.
 Main source of information was stackalytics.com. Please take a look at
 few graphs on slides 4-7 [5], built based on data from stackalytics. Major
 conclusions from these graphs:
 1) Rather small number of core reviewers (in comparison with overall
 number of contributors) reviewing 40-60% of patch sets, depending on repo
 (40% fuel-library, 60% fuel-web). See slide #4.
 2) Load on core reviewers in Fuel team is higher in average, if you
 compare it with some other OpenStack projects. Average load on core
 reviewer across Nova, Keystone, Neutron and Cinder is 2.5 reviews a day. In
 Fuel though it is 3.6 for fuel-web and 4.6 for fuel-library. See slide #6.
 3) Statistics on how fast feedback on code proposed is provided:
 - fuel-library: 2095 total reviews in 30 days [13], 80 open reviews,
 average wait time for reviewer - 1d 1h [12]
 - fuel-web: 1789 total reviews in 30 days [14], 52 open reviews, average
 wait time for reviewer - 1d 17h [15]

 There is no need to have deep analysis on whether we have well defined
 areas of ownership in Fuel components or not: we don’t have it formally
 defined, and it’s not documented anywhere. So, finding the right core
 reviewer can be a challenging task for a new contributor to Fuel, and this
 issue has to be addressed.


 ** Proposed solution **
 According to stackalytics, for the whole fuel-group we had 262 reviewers
 with 24 core reviewers for the past 180 days [19]. I think that these
 numbers can be 

Re: [openstack-dev] [heat][horizon] Backward-incompatible changes to the Neutron API

2015-08-27 Thread Paul Michali
Akihiro, can you look at the developer's reference I posted (191944), where
there is the overall API plan and a proposal for handling backward
compatibility.

Thanks!

Paul Michali (pc_m)

On Thu, Aug 27, 2015 at 11:12 AM Akihiro Motoki amot...@gmail.com wrote:

 As Matthias said, Horizon worked (and in many cases works) across releases.

 Horizon determines supported features based on keystone catalogs,
 extension list from back-end services (like nova, neutron).
 Micro-versioning support may come in future (though it is not supported).

  For the backward-incompatible API change in VPNaaS, Horizon can cope
  if Neutron (including VPNaaS) provides a way to determine which version
  is available.
  Right now, the only way is to expose it through the extension list.

  On the other hand, it is tough to maintain multiple versions of
  implementations.
  It is reasonable to me that Horizon supports two implementations for one or
  two release cycle(s) and drops the older implementation later.

 Akihiro


 2015-08-27 16:29 GMT+09:00 Matthias Runge mru...@redhat.com:

 On 26/08/15 23:55, James Dempsey wrote:
  Greetings Heat/Horizon Devs,
 
  There is some talk about possibly backward-incompatible changes to the
  Neutron VPNaaS API and I'd like to better understand what that means for
  Heat and Horizon.
 
  It has been proposed to change Neutron VPNService objects such that they
  reference a new resource type called an Endpoint Group instead of
  simply a Subnet.
 
  Does this mean that any version of Heat/Horizon would only be able to
  support either the old or new Neutron API, or is there some way to allow
  a version of Heat/Horizon to support both?
 
 In the past, Horizon worked across releases.

 The way horizon works is, it looks out for a networking endpoint in
 keystone catalog. We don't really care if it's nova or neutron
 answering. The rest should be discoverable via API.
 Horizon uses neutronclient rather than directly talking to neutron by
 using its API interface.

 If you make it discoverable, and you'd add that to neutronclient,
 horizon could support both.

 Matthias
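The extension-list discovery approach described above could look roughly like this on the consuming side; the alias value and the shape of the extension list are assumptions for illustration, mirroring what neutron's extension listing returns:

```python
def supports_endpoint_groups(extensions):
    """Hypothetical check: decide which VPNaaS code path to use based on
    the aliases present in neutron's advertised extension list."""
    aliases = {ext["alias"] for ext in extensions}
    return "vpn-endpoint-groups" in aliases

old_cloud = [{"alias": "vpnaas"}]
new_cloud = [{"alias": "vpnaas"}, {"alias": "vpn-endpoint-groups"}]

assert not supports_endpoint_groups(old_cloud)  # render the subnet-based UI
assert supports_endpoint_groups(new_cloud)      # render the endpoint-group UI
```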









[openstack-dev] [neutron][L3][dvr][fwaas] FWaaS

2015-08-27 Thread bharath

Hi,

While testing FWaaS, I found router_info is not getting updated. The 
list always seems to be empty and gets updated only after a restart 
of the FW agent.


This issue results in an empty list when calling 
_get_router_info_list_for_tenant.


I can see comments like *for routers without an interface - 
get_routers returns the router - but this is not yet populated in 
router_info*,
but in my case, even though the routers have an interface, the 
router_info is still empty.


It seems to be recent breakage as this was working fine in the last month.


Thanks,
bharath


[openstack-dev] Gerrit downtime on Friday 2015-09-11 at 23:00 UTC

2015-08-27 Thread James E. Blair
On Friday, September 11 at 23:00 UTC Gerrit will be unavailable for
about 30 minutes while we rename some projects.

Existing reviews, project watches, etc, should all be carried
over. Currently, we plan on renaming the following projects:

  stackforge/os-ansible-deployment -> openstack/openstack-ansible
  stackforge/os-ansible-specs -> openstack/openstack-ansible-specs

  stackforge/solum -> openstack/solum
  stackforge/python-solumclient -> openstack/python-solumclient
  stackforge/solum-specs -> openstack/solum-specs
  stackforge/solum-dashboard -> openstack/solum-dashboard
  stackforge/solum-infra-guestagent -> openstack/solum-infra-guestagent

  stackforge/magnetodb -> openstack/magnetodb
  stackforge/python-magnetodbclient -> openstack/python-magnetodbclient
  stackforge/magnetodb-specs -> openstack/magnetodb-specs

  stackforge/kolla -> openstack/kolla
  stackforge/neutron-powervm -> openstack/networking-powervm

This list is subject to change.

The projects in this list have recently become official OpenStack
projects and many of them have been waiting patiently for some time to
be moved from stackforge/ to openstack/.  This is likely to be the last
of the so-called big-tent moves as we plan on retiring the stackforge/
namespace and moving most of the remaining projects into openstack/ [1].

If you have any questions about the maintenance, please reply here or
contact us in #openstack-infra on Freenode.

-Jim

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html



Re: [openstack-dev] [ironic] Re: New API for node create, specifying initial provision state

2015-08-27 Thread Dmitry Tantsur
2015-08-27 18:43 GMT+02:00 Clint Byrum cl...@fewbar.com:

 Excerpts from Lucas Alvares Gomes's message of 2015-08-27 02:40:26 -0700:
  On Wed, Aug 26, 2015 at 11:09 PM, Julia Kreger
  juliaashleykre...@gmail.com wrote:
   My apologies for not expressing my thoughts on this matter
   sooner, however I've had to spend some time collecting my
   thoughts.
  
   To me, it seems like we do not trust our users.  Granted,
   when I say users, I mean administrators who likely know more
   about the disposition and capabilities of their fleet than
   could ever be discovered or inferred via software.
  
   Sure, we have other users, mainly in the form of consumers,
   asking Ironic for hardware to be deployed, but the driver for
   adoption is who feels the least amount of pain.
  
   API versioning aside, I have to ask the community, what is
   more important?
  
   - An inflexible workflow that forces an administrator to
   always have a green field, and to step through a workflow
   that we've dictated, which may not apply to their operational
   scenario, ultimately driving them to write custom code to
   inject new nodes into the database directly, which will
   surely break from time to time, causing them to hate Ironic
   and look for a different solution.
  
   - A happy administrator that has the capabilities to do their
   job (and thus manage the baremetal node wherever it is in the
   operator's lifecycle) in an efficient fashion, thus causing
   them to fall in love with Ironic.
  
 
  I'm sorry, I find the language used in this reply very offensive.
  That's not even a real question; due to the alternatives, you're basically
  asking the community What's more important, be happy or be sad? Be
  efficient or not efficient?
 


 Funny, I find your response a bit offensive, as a user of Ironic who has
 been falling in love with it for a couple of years now, and is confused
 by the recent changes to the API that completely ignore me.

 I have _zero_ interest in this workflow. I want my nodes to be available
 as soon as I tell Ironic about them. You've added a step that makes no
 sense to me. Why not just let me create nodes in that state?


Because we don't have a test on a user's experience level in OpenStack in
our node-create command ;) It won't distinguish between you, knowing
precisely what you're doing, and a confused user who picked a wrong command
and is one step from shooting themselves in the foot.



 It reminds me of a funny thing Monty Taylor pointed out in the Westin in
 Atlanta. We had to scramble to find our room keys to work the elevator,
 and upon unlocking the elevator, had to then push the floor for that
 room. As he pointed out Why doesn't it just go to my floor now?

 So, I get why you have the workflow, but I don't understand why you didn't
 include a short circuit for your existing users who are _perfectly happy_
 not having the workflow. So now I have to pin to an old API version to
 keep working the way I want, and you will eventually remove that API
 version, and I will proceed to grumble about why I have to change.


Everything I know about API versioning tells me that we won't ever remove a
single API version.



  It's not about an inflexible workflow which dictates what people
  do making them hate the project. It's about finding a common pattern
  for an work flow that will work for all types of machines, it's about
  consistency, it's about keeping the history of what happened to that
  node. When a node is on a specific state you know what it's been
  through so you can easily debug it (i.e an ACTIVE node means that it
  passed through MANAGEABLE -> CLEAN* -> AVAILABLE -> DEPLOY* -> ACTIVE.
  Even if some of the states are non-op for a given driver, it's a clear
  path).
 
  Think about our API, it's not that we don't allow vendors to add every
  new features they have to the core part of the API because we don't
  trust them or we think that their shiny features are not worthy. We
  don't do that to make it consistent, to have an abstraction layer that
  will work the same for all types of hardware.
 
  I mean it when I said I want to have a fresh mind to read the proposal
  this new work flow. But I rather read a technical explanation than an
  emotional one. What I want to know for example is what it will look
  like when one register a node in ACTIVE state directly? What about the
  internal driver fields? What about the TFTP/HTTP environment that is
  built as part of the DEPLOY process ? What about the ports in Neutron
  ? and so on...
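The happy-path state sequence described above can be sketched as a tiny transition table; this is a simplification, as the real Ironic state machine has more states, verbs, and error transitions:

```python
# Hedged sketch of the provision-state happy path, not Ironic's actual
# state machine.  Each state maps to the next state on the happy path.
TRANSITIONS = {
    "enroll": ["manageable"],
    "manageable": ["cleaning"],
    "cleaning": ["available"],
    "available": ["deploying"],
    "deploying": ["active"],
}

def path_to_active(start="enroll"):
    """Walk the single happy path from enrollment to active."""
    path, state = [start], start
    while TRANSITIONS.get(state):
        state = TRANSITIONS[state][0]
        path.append(state)
    return path

assert path_to_active() == [
    "enroll", "manageable", "cleaning", "available", "deploying", "active"]
```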

 Emotions matter to users. You're right that a technical argument helps
 us get our work done efficiently. But don't forget _why Ironic exists_.
 It's not for you to develop on, and it's not just for Nova to talk to.
 It's for your users to handle their datacenter in the wee hours without
 you to hold their hand. Make that hard, get somebody fired or burned
 out, and no technical argument will ever convince them to use Ironic
 again.


You care 

Re: [openstack-dev] [Ironic][Nova] The process around the Nova Ironic driver

2015-08-27 Thread Matt Riedemann



On 8/26/2015 6:20 PM, Michael Davies wrote:

Hey Everyone,

John Villalovos and I have been acting as the Nova-Ironic liaisons,
which mostly means dealing with bugs that have been raised against the
Ironic driver in Nova. So you can understand what we’ve been doing, and
how you can help us do that job better, we’re writing this email to
clarify the process we’re following.

Weekly Bug Scrub: Each week (Tuesday 2300 UTC) John and Michael meet to
go through the results of this query
https://bugs.launchpad.net/nova/+bugs?field.tag=ironicorderby=-idstart=0
to find bugs that we don’t know about, to see what progress has been
happening, and to see if there’s any direct action that needs to be
taken. We record the result of this triage over here
https://wiki.openstack.org/wiki/Nova-Ironic-Bugs

Fix Bugs: If we are able, and have the capacity, we try and fix bugs
ourselves. Where we need it, we seek help from both the Nova and/or
Ironic communities. But finding people to help fix bugs in the Nova
Ironic driver is probably an area we can do better at (*hint* *hint*)

Review Bugs: Once fixes are proposed, we solicit reviews for these
fixes.  Once we’re happy that the proposed solution isn’t completely
bonkers, one of us will +1 that review, and add it to the list of Nova
bugs that need review by Nova:
https://etherpad.openstack.org/p/liberty-nova-priorities-tracking

Attend the Nova team meeting: One of us will attend the weekly IRC
meeting to represent the Ironic team interests within Nova. Nova might
want to discuss new requirements they have on drivers, or to discuss a
bug that has been raised, or to find out, or to communicate, which bugs
the other team feel are important to be addressed before the
ever-looming next release.

Attend the Ironic team Meeting: John will attend the weekly IRC meeting
to raise any issues with the broader team that we become aware of.  It
might be that a new bug has been raised and we need to find someone
willing to take it on, or it may be that an existing bug with a proposed
change is languishing due to a lack of reviews (Michael can’t do that as
2:30am local time is just a little wrong for an IRC meeting :)


So there it is, that's how the Ironic team are supporting the Ironic
driver in Nova.  If you have any questions, or just want to dive in and
fix bugs raised against the driver, you’re most welcome to get in touch
- on IRC I’m ‘mrda’ and John is ‘jlvillal‘ :)

Michael…
--
Michael Davies mich...@the-davies.net
Rackspace Cloud Builders Australia





Thanks, this all seems like goodness to me.  I was waiting for the, 
'but, the problem is...'. :)


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova] testing for setting the admin password via the libvirt driver

2015-08-27 Thread Matt Riedemann



On 8/25/2015 9:14 AM, Matt Riedemann wrote:

Support to change the admin password on an instance via the libvirt
driver landed in liberty [1] but the hypervisor support matrix wasn't
updated [2].  There is a version restriction in the driver that it won't
work unless you're using at least libvirt 1.2.16.

We should be able to at least update the hypervisor support matrix that
this is supported for libvirt with the version restriction.  markus_z
actually pointed that out in the review of the change to add the support
but it was ignored.
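The libvirt 1.2.16 restriction mentioned above amounts to a minimum-version gate, which can be sketched as a tuple comparison; the constant name here is made up, though nova's libvirt driver performs a comparable check:

```python
# Hypothetical constant name; the 1.2.16 floor is from the thread above.
MIN_LIBVIRT_SET_ADMIN_PASSWORD = (1, 2, 16)

def has_min_version(current, minimum=MIN_LIBVIRT_SET_ADMIN_PASSWORD):
    """Gate a driver feature on the running libvirt version, using
    Python's lexicographic tuple comparison."""
    return tuple(current) >= minimum

assert not has_min_version((1, 2, 2))   # libvirt on ubuntu 14.04 gate nodes
assert has_min_version((1, 2, 16))
```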

The other thing I was wondering about was testing.  The check/gate queue
jobs with ubuntu 14.04 only have libvirt 1.2.2.

There is the fedora 21 job that runs on the experimental queue and I've
traditionally considered this a place to test out libvirt driver
features that need something newer than 1.2.2, but that only goes up to
libvirt 1.2.9.3 [3].

It looks like you have to get up to fedora 23 to be able to test this
set-admin-password function [4].  In fact it looks like the only major
distro out there right now that supports this new enough version of
libvirt is fc23 [5].

Does anyone fancy getting a f23 job setup in the experimental queue for
nova?  It would be nice to actually be able to test the bleeding edge
features that we put into the driver code.

[1] https://review.openstack.org/#/c/185910/
[2]
http://docs.openstack.org/developer/nova/support-matrix.html#operation_set_admin_password

[3]
http://logs.openstack.org/28/215328/3/check/gate-tempest-dsvm-f21/8e9eae5/logs/rpm-qa.txt.gz

[4] http://rpmfind.net/linux/rpm2html/search.php?query=libvirt
[5] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix



I poked a bit today on a f23 job and there are people working on getting 
an f22 node available for testing in infra.  f23 isn't available until 
2015/10/27 so I guess I'm a bit premature on expecting to get a 
gate-tempest-dsvm-f23 job setup.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] --detailed-description for OpenStack items

2015-08-27 Thread Daniel Speichert
On 8/27/2015 13:23, Tim Bell wrote:

  

 Some projects such as cinder include a detailed description option
 where you can include an arbitrary string with a volume to remind the
 admins what the volume is used for.

  

 Has anyone looked at doing something similar for Nova for instances
 and Glance for images?

  

 In many cases, the names get heavily overloaded with information.

  

 Tim

  

Wouldn't it be appropriate/simple to just specify a metadata like
description=this is what it's used for?

Regards,
Daniel Speichert




Re: [openstack-dev] Removing unused dependencies like:'discover' module from all projects

2015-08-27 Thread Matt Riedemann



On 8/27/2015 6:26 AM, Chandan kumar wrote:

Hello,

I am packaging 'discover' module 
https://bugzilla.redhat.com/show_bug.cgi?id=1251951  for RDO.
Since this module is no longer maintained, as per
http://code.google.com/p/unittest-ext/
and this module is used as a test dependency in all the projects, as
per the 'openstack-requirements' module 
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L246
and I had a discussion with lifeless regarding that
https://github.com/testing-cabal/unittest-ext/issues/96 

and he has proposed a fix on that: https://review.openstack.org/#/c/217046/

Can someone confirm whether it is obsolete or not?
If it is obsolete, can we remove it if it does not break any project,
so that I can create a bug to track it?

Needs input on that.

Thanks,

Chandan Kumar





The server projects dropped python 2.6 support in kilo, but the 
libraries still run py26 jobs for compat.  So while it's probably 
OK to remove discover as a test dependency for the server projects, I'd 
only remove it for libraries if they are no longer supporting python < 2.7.
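Matt's suggestion, keeping discover only where python < 2.7 must still be supported, could be expressed with an environment marker in a project's test-requirements.txt. This is a hypothetical sketch assuming a pip new enough to understand markers, not the actual global-requirements entry:

```
# test-requirements.txt (hypothetical sketch)
# keep the 'discover' backport only on interpreters older than 2.7
discover; python_version < '2.7'
```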


--

Thanks,

Matt Riedemann




[openstack-dev] --detailed-description for OpenStack items

2015-08-27 Thread Tim Bell

Some projects such as cinder include a detailed description option where you can 
include an arbitrary string with a volume to remind the admins what the volume 
is used for.

Has anyone looked at doing something similar for Nova for instances and Glance 
for images?

In many cases, the names get heavily overloaded with information.

Tim



Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-27 Thread Ben Swartzlander

On 08/27/2015 10:43 AM, Ivan Kolodyazhny wrote:

Hi,

Looks like we need to be able to set the AZ per backend. What do you think 
about such an option?


I dislike such an option.

The whole premise behind an AZ is that it's a failure domain. The node 
running the cinder services is in exactly one such failure domain. If 
you have 2 backends in 2 different AZs, then the cinder services 
managing those backends should be running on nodes that are also in 
those AZs. If you do it any other way then you create a situation where 
a failure in one AZ causes loss of services in a different AZ, which is 
exactly what the AZ feature is trying to avoid.


If you do the correct thing and run cinder services on nodes in the AZs 
that they're managing then you will never have a problem with the 
one-AZ-per-cinder.conf design we have today.
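Ben's point about the one-AZ-per-cinder.conf design can be illustrated with a minimal config sketch. The option name storage_availability_zone is cinder's; the AZ value and backend name are made up for illustration:

```ini
# cinder.conf on a c-vol node that physically lives in failure domain
# az-east; every backend this node manages belongs to that one AZ.
[DEFAULT]
storage_availability_zone = az-east
enabled_backends = lvm-east

[lvm-east]
volume_backend_name = lvm-east
```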


-Ben




Regards,
Ivan Kolodyazhny

On Mon, Aug 10, 2015 at 7:07 PM, John Griffith 
john.griffi...@gmail.com wrote:




On Mon, Aug 10, 2015 at 9:24 AM, Dulko, Michal
michal.du...@intel.com wrote:

Hi,

In the Kilo cycle [1] was merged. It started passing the AZ of a
booted VM to Cinder to make volumes appear in the same AZ as the
VM. This is certainly a good approach, but I wonder how to
deal with a use case where the administrator cares about the AZ of the
compute node of the VM but wants to ignore the AZ of the volume. Such a
case would be when fault tolerance of storage is maintained on
another level - for example using Ceph replication and failure
domains.

Normally I would simply disable the AvailabilityZoneFilter in
cinder.conf, but it turns out cinder-api validates whether the
availability zone is correct [2]. This means that if Cinder
has no AZs configured, all requests from Nova will fail at the
API level.

Configuring fake AZs in Cinder is also problematic, because an AZ
cannot be configured per backend. I can only
configure it per c-vol node, so I would need N extra nodes
running c-vol, where N is the number of AZs, to achieve that.

Is there any solution to satisfy such use case?

[1] https://review.openstack.org/#/c/157041
[2]

https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L279-L282


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Seems like we could introduce the capability in Cinder to ignore
that if it's desired?  It would probably be worth looking on the
Cinder side at being able to configure multiple AZs for a volume
(perhaps even an aggregate zone just for Cinder).  That way we
still honor the setting but provide a way to get around it for
those who know what they're doing.

John


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [ceilometer] [rally] [sahara] [heat] [congress] [tripleo] ceilometer in gate jobs

2015-08-27 Thread Tim Hinrichs
I pushed a patch for Congress dependent on your patch.

https://review.openstack.org/#/c/217765/

Tim


On Thu, Aug 27, 2015 at 8:05 AM Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi,

 I think filing the cross-project bug is ok. I've already uploaded patch
 for sahara jobs - https://review.openstack.org/217751

 Thanks.

 On Wed, Aug 26, 2015 at 6:46 PM, Chris Dent chd...@redhat.com wrote:


 [If any of this is wrong I hope someone from infra or qa will
 correct me. Thanks. This feels a bit cumbersome so perhaps there is
 a way to do it in a more automagic fashion[1].]

 In the near future ceilometer will be removing itself from the core
 of devstack and using a plugin instead. This is to allow more
 independent control and flexibility.

 These are the related reviews:

 * remove from devstack: https://review.openstack.org/196383
 * updated jenkins jobs: https://review.openstack.org/196446

 If a project is using ceilometer in its gate jobs then before the
 above can merge adjustments need to be made to make sure that the
 ceilometer plugin is enabled. The usual change for this would be a
 form of:

   DEVSTACK_LOCAL_CONFIG+=$'\n'"enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer"

 I'm not entirely clear on what we will need to do coordinate this,
 but it is clear some coordination will need to be done such that
 ceilometer remains in devstack until everything that is using
 ceilometer in devstack is ready to use the plugin.

 A grep through the jenkins jobs suggests that the projects in
 $SUBJECT (rally, sahara, heat, congress, tripleo) will need some
 changes.

 How shall we proceed with this?

 One option is for project team members[2] to make a stack of dependent
 patches that are dependent on 196446 above (which itself is dependent
 on 196383) so that it all happens in one fell swoop.

 What are the other options?

 Thanks for your input.

 [1] That is, is it worth considering adding functionality to
 devstack's sense of enabled such that if a service is enabled
 devstack knows how to look for a plugin if it doesn't find local
 support. With the destruction of the stackforge namespace we can
 perhaps guess the git URL for plugins?

 [2] Or me if that's better.

 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] New API Guidelines Ready for Cross Project Review

2015-08-27 Thread Everett Toews
Hi All,

The following API guidelines are ready for cross project review. They will be 
merged on Sept. 4 if there's no further feedback.

1. Add description of pagination parameters
https://review.openstack.org/#/c/190743/

2. Require OpenStack- in headers
https://review.openstack.org/#/c/215683/

3. Add the condition for using a project term
https://review.openstack.org/#/c/208264/

4. Added note about caching of responses when using https
https://review.openstack.org/#/c/185288/

5. add section describing 501 common mistake
https://review.openstack.org/#/c/183456/

Cheers,
Everett


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Re: New API for node create, specifying initial provision state

2015-08-27 Thread Clint Byrum
Excerpts from Lucas Alvares Gomes's message of 2015-08-27 02:40:26 -0700:
 On Wed, Aug 26, 2015 at 11:09 PM, Julia Kreger
 juliaashleykre...@gmail.com wrote:
  My apologies for not expressing my thoughts on this matter
  sooner, however I've had to spend some time collecting my
  thoughts.
 
  To me, it seems like we do not trust our users.  Granted,
  when I say users, I mean administrators who likely know more
  about the disposition and capabilities of their fleet than
  could ever be discovered or inferred via software.
 
  Sure, we have other users, mainly in the form of consumers,
  asking Ironic for hardware to be deployed, but the driver for
  adoption is who feels the least amount of pain.
 
  API versioning aside, I have to ask the community, what is
  more important?
 
  - An inflexible workflow that forces an administrator to
  always have a green field, and to step through a workflow
  that we've dictated, which may not apply to their operational
  scenario, ultimately driving them to write custom code to
  inject new nodes into the database directly, which will
  surely break from time to time, causing them to hate Ironic
  and look for a different solution.
 
  - A happy administrator that has the capabilities to do their
  job (and thus manage the baremetal node wherever it is in the
  operator's lifecycle) in an efficient fashion, thus causing
  them to fall in love with Ironic.
 
 
 I'm sorry, I find the language used in this reply very offensive.
 That's not even a real question; due to the alternatives, you're basically
 asking the community "What's more important, be happy or be sad? Be
 efficient or not efficient?"
 


Funny, I find your response a bit offensive, as a user of Ironic who has
been falling in love with it for a couple of years now, and is confused
by the recent changes to the API that completely ignore me.

I have _zero_ interest in this workflow. I want my nodes to be available
as soon as I tell Ironic about them. You've added a step that makes no
sense to me. Why not just let me create nodes in that state?
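
To make the "short circuit" concrete, here is a minimal, purely illustrative
sketch (the function and flag names are invented, not Ironic's actual API):
registration either starts every node at the beginning of the enforced path,
or, when the operator explicitly asserts a state, lands directly in it.

```python
# Illustrative sketch only: the state names match Ironic's, but this API
# shape is invented for the sake of the argument above.
ENFORCED_PATH = ["ENROLL", "MANAGEABLE", "AVAILABLE", "ACTIVE"]


def create_node(target_state="ENROLL", operator_asserted=False):
    """Return the provision state the new node record starts in."""
    if operator_asserted and target_state in ENFORCED_PATH:
        # Trust the operator: register the node directly in the
        # requested state instead of walking the whole workflow.
        return target_state
    # Default behavior: every node starts at the beginning of the path.
    return ENFORCED_PATH[0]
```

A short circuit like this would let existing users keep their "available as
soon as I tell Ironic about them" behavior without pinning to an old API
version.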

It reminds me of a funny thing Monty Taylor pointed out in the Westin in
Atlanta. We had to scramble to find our room keys to work the elevator,
and upon unlocking the elevator, had to then push the floor for that
room. As he pointed out, "Why doesn't it just go to my floor now?"

So, I get why you have the workflow, but I don't understand why you didn't
include a short circuit for your existing users who are _perfectly happy_
not having the workflow. So now I have to pin to an old API version to
keep working the way I want, and you will eventually remove that API
version, and I will proceed to grumble about why I have to change.

 It's not about an inflexible workflow which dictates what people
 do making them hate the project. It's about finding a common pattern
 for an work flow that will work for all types of machines, it's about
 consistency, it's about keeping the history of what happened to that
 node. When a node is in a specific state you know what it's been
 through, so you can easily debug it (i.e. an ACTIVE node means that it
 passed through MANAGEABLE -> CLEAN* -> AVAILABLE -> DEPLOY* -> ACTIVE.
 Even if some of the states are no-ops for a given driver, it's a clear
 path).
 
 Think about our API, it's not that we don't allow vendors to add every
 new features they have to the core part of the API because we don't
 trust them or we think that their shiny features are not worthy. We
 don't do that to make it consistent, to have an abstraction layer that
 will work the same for all types of hardware.
 
 I mean it when I said I want to have a fresh mind to read the proposal
 of this new workflow. But I'd rather read a technical explanation than an
 emotional one. What I want to know, for example, is what it will look
 like when one registers a node in the ACTIVE state directly. What about the
 internal driver fields? What about the TFTP/HTTP environment that is
 built as part of the DEPLOY process? What about the ports in Neutron?
 And so on...

Emotions matter to users. You're right that a technical argument helps
us get our work done efficiently. But don't forget _why Ironic exists_.
It's not for you to develop on, and it's not just for Nova to talk to.
It's for your users to handle their datacenter in the wee hours without
you to hold their hand. Make that hard, get somebody fired or burned
out, and no technical argument will ever convince them to use Ironic
again.

I think I see the problem though. Ironic needs a new mission statement:

To produce an OpenStack service and associated libraries capable of
managing and provisioning physical machines, and to do this in a
security-aware and fault-tolerant manner.

Mission accomplished. It's been capable of doing that for a long time.
Perhaps the project should rethink whether _users_ should be considered
in a new mission statement.


Re: [openstack-dev] [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'

2015-08-27 Thread Rich Megginson

On 08/27/2015 07:00 AM, Gilles Dubreuil wrote:


On 27/08/15 22:40, Gilles Dubreuil wrote:


On 27/08/15 16:59, Gilles Dubreuil wrote:


On 26/08/15 06:30, Rich Megginson wrote:

This concerns the support of the names of domain scoped Keystone
resources (users, projects, etc.) in puppet.

At the puppet-openstack meeting today [1] we decided that
puppet-openstack will support Keystone domain scoped resource names
without a '::domain' in the name, only if the 'default_domain_id'
parameter in Keystone has _not_ been set.  That is, if the default
domain is 'Default'.  In addition:

* In the OpenStack L release, if 'default_domain_id' is set, puppet will
issue a warning if a name is used without '::domain'.

The default domain is always set to 'default' unless overridden to
something else.

Just to clarify, I don't see any logical difference between the
default_domain_id being 'default' or something else.


There is, however, a difference between explicitly setting the value to 
something other than 'default', and not setting it at all.


That is, if a user/operator specifies

  keystone_domain { 'someotherdomain':
    is_default => true,
  }

then the user/operator is explicitly telling puppet-keystone that a 
non-default domain is being used, and that the user/operator is aware of 
domains, and will create domain scoped resources with the '::domain' in 
the name.




Per the keystone.conf comment (as seen below), the default_domain_id,
whatever its value, is created as a valid domain.

# This references the domain to use for all Identity API v2 requests
(which are not aware of domains). A domain with this ID will be created
for you by keystone-manage db_sync in migration 008. The domain
referenced by this ID cannot be deleted on the v3 API, to prevent
accidentally breaking the v2 API. There is nothing special about this
domain, other than the fact that it must exist in order to maintain
support for your v2 clients. (string value)
#default_domain_id = default

To be able to test if a 'default_domain_id' is set or not, actually
translates to checking if the id is 'default' or something else.


Not exactly.  There is a difference between explicitly setting the 
value, and implicitly relying on the default 'default' value.



But I don't see the point here. If a user decides to change 'default' to
'This_is_the_domain_id_for_legacy_v2', how does this help?


If the user changes that, then that means the user has also decided to 
explicitly provide '::domain' in all domain scoped resource names.
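
As a rough illustration of the naming rule under discussion (the helper name
and the exact behavior are a sketch, not puppet-keystone's actual provider
code), resolving a resource title might look like this: split on '::', and
when no '::domain' suffix is present, fall back to the default domain with
the warning the L/M releases would emit.

```python
import warnings


def split_resource_name(title, default_domain="Default"):
    """Return (name, domain) for a domain scoped Keystone resource title.

    Hypothetical sketch: 'glance::services' -> ('glance', 'services');
    a bare 'glance' warns and assumes the default domain.
    """
    if "::" in title:
        name, domain = title.rsplit("::", 1)
        return name, domain
    warnings.warn("resource %r has no '::domain' suffix; assuming domain %r "
                  "(this will become an error in a later release)"
                  % (title, default_domain))
    return title, default_domain
```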




If that makes sense then I would actually avoid the intermediate stage:

* In OpenStack L release:
Puppet will issue a warning if a name is used without '::domain'.

* From Openstack M release:
A name must be used with '::domain'.


* In the OpenStack M release, puppet will issue a warning if a name is
used without '::domain', even if 'default_domain_id' is not set.

Therefore the 'default_domain_id' is never 'not set'.


* In N (or possibly, O), resource names will be required to have
'::domain'.


I understand, from Openstack N release and ongoing, the domain would be
mandatory.

So I would like to revisit the list:

* In OpenStack L release:
   Puppet will issue a warning if a name is used without '::domain'.

* In OpenStack M release:
   Puppet will issue a warning if a name is used without '::domain'.

* From Openstack N release:
   A name must be used with '::domain'.



+1


The current spec [2] and current code [3] try to support names without a
'::domain' in the name, in non-default domains, provided the name is
unique across _all_ domains.  This will have to be changed in the
current code and spec.


Ack


[1]
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-08-25-15.01.html

[2]
http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html

[3]
https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L217



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-27 Thread Dulko, Michal
There was a little IRC discussion on that [1] and I've started to work on 
creating a spec for Mitaka. I've gotten a bit busy lately, but finishing it 
is still in my backlog. I'll make sure to post it up for review once the 
Mitaka specs bucket opens.

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2015-08-11.log.html#t2015-08-11T14:48:49
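
For reference, the API-level check in question amounts to something like the
following sketch (the exception and helper names are illustrative, not
Cinder's actual internals from [2]); the `ignore_az_check` flag models the
opt-out a spec could propose for deployments that handle storage fault
tolerance elsewhere.

```python
# Hedged sketch of cinder-api's AZ validation; names are invented.
class InvalidAvailabilityZone(Exception):
    pass


def validate_az(requested_az, configured_azs, ignore_az_check=False):
    """Reject a request whose availability zone is not configured.

    ``ignore_az_check`` models the opt-out discussed in this thread:
    accept any AZ when fault tolerance is handled at another level
    (e.g. Ceph replication and failure domains).
    """
    if requested_az is None or ignore_az_check:
        return requested_az
    if requested_az not in configured_azs:
        raise InvalidAvailabilityZone(requested_az)
    return requested_az
```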

 -Original Message-
 From: Ivan Kolodyazhny [mailto:e...@e0ne.info]
 Sent: Thursday, August 27, 2015 4:44 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Rogon, Kamil
 Subject: Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability
 zones
 
 Hi,
 
  Looks like we need to be able to set the AZ per backend. What do you think
  about such an option?
 
 
 Regards,
 Ivan Kolodyazhny
 
  On Mon, Aug 10, 2015 at 7:07 PM, John Griffith john.griffi...@gmail.com
  wrote:
 
 
 
 
    On Mon, Aug 10, 2015 at 9:24 AM, Dulko, Michal
  michal.du...@intel.com wrote:
 
 
   Hi,
 
    In the Kilo cycle, [1] was merged. It started passing the AZ of a booted
  VM to Cinder to make volumes appear in the same AZ as the VM. This is certainly
  a good approach, but I wonder how to deal with a use case where the
  administrator cares about the AZ of the VM's compute node, but wants to
  ignore the AZ of the volume. Such a case would be when fault tolerance of
  storage is maintained at another level - for example using Ceph replication
  and failure domains.
 
    Normally I would simply disable the AvailabilityZoneFilter in
  cinder.conf, but it turns out cinder-api validates whether the availability
  zone is correct [2]. This means that if Cinder has no AZs configured, all
  requests from Nova will fail at the API level.
 
    Configuring fake AZs in Cinder is also problematic, because an AZ
  cannot be configured per backend. I can only configure it per
  c-vol node, so I would need N extra nodes running c-vol, where N is the
  number of AZs, to achieve that.
 
   Is there any solution to satisfy such use case?
 
   [1] https://review.openstack.org/#/c/157041
   [2]
  https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L279-L282
 
 
   
  __
    OpenStack Development Mailing List (not for usage questions)
    Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
    Seems like we could introduce the capability in Cinder to ignore that if
  it's desired?  It would probably be worth looking on the Cinder side at being
  able to configure multiple AZs for a volume (perhaps even an aggregate
  zone just for Cinder).  That way we still honor the setting but provide a way
  to get around it for those who know what they're doing.
 
 
   John
 
 
   
 __
   OpenStack Development Mailing List (not for usage questions)
    Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Code review process in Fuel and related issues

2015-08-27 Thread Evgeniy L
Hi Mike,

I have several comments.

 SLA should be the driver of doing timely reviews, however we can’t allow
to fast-track code into master suffering quality of review ...

As for me, the idea of an SLA contradicts qualitative reviews.

Another thing is that I got a bit confused by the difference between Core
Reviewer and Component Lead. Aren't those the same persons? Shouldn't every
Core Reviewer know the architecture and best practices, and participate in
design architecture sessions?

 - If core reviewer has not landed the code yet, Component Lead merges
patch within SLA defined (or declines to merge and provides explanation as
part of review).

For example, here, as far as I'm concerned, the Component Lead is a Core
Reviewer, since he has permission to merge.

Thanks,


On Wed, Aug 19, 2015 at 11:31 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Hi all,
 let's discuss code review process in Fuel and what we can improve. For
 those who want to just have a quick context of this email, please check out
 presentation slides [5].

 ** Issues **
 Depending on a Fuel subproject, I'm aware of two buckets of issues with
 code review in Fuel:
 a) It is hard to get code reviewed and merged
 b) Quality of code review itself could be better

 First bucket:
 1) It is hard to find subject matter experts who can help and core
 reviewers for the area of code, especially if you are new to the project
 2) Contributor sometimes receives contradicting opinions from other
 reviewers, including cores
 3) Assigned / responsible core reviewer is needed for a feature in order
 to help in architectural negotiations, guiding through, landing the code
 into master
 4) Long wait time for getting code reviewed

 Quality-related items:
 5) Not thorough enough, full review in one shot. For example, reviewer can
 put -1 due to missed comma, but do not notice major gap in the code. It
 leads to many patch sets, and demotivation of contributors
 6) Some of the core reviewers decreased their involvement, and so number
 of reviews has dropped dramatically. However, they still occasionally merge
 code. I propose to remove these cores, and get them back if their
 involvement is increased back again (I very rarely merge code, but I'm one
 of those to be removed from cores). This is standard practice in OpenStack
 community as well, see Neutron as example [4, line 270].
 7) As a legacy of the past, we still have old core reviewers being able to
 merge code in all Fuel repos. All new cores have core rights only for
 single repo, which is their primary area of expertise. For example, core
 team size for fuel-library is adidenko + whole fuel-core group [7]. In
 fact, there are just 4 'trusted' or 'real' core reviewers in fuel-library,
 not the whole fuel-core group.

 These problems are not new to OpenStack and open source in general. You
 can find discussions about same and similar issues in [1], [2], [3].


 ** Analysis of data **
 In order to understand what can be improved, I mined the data at first.
 Main source of information was stackalytics.com. Please take a look at
 few graphs on slides 4-7 [5], built based on data from stackalytics. Major
 conclusions from these graphs:
 1) Rather small number of core reviewers (in comparison with overall
 number of contributors) reviewing 40-60% of patch sets, depending on repo
 (40% fuel-library, 60% fuel-web). See slide #4.
 2) Load on core reviewers in Fuel team is higher in average, if you
 compare it with some other OpenStack projects. Average load on core
 reviewer across Nova, Keystone, Neutron and Cinder is 2.5 reviews a day. In
 Fuel though it is 3.6 for fuel-web and 4.6 for fuel-library. See slide #6.
 3) Statistics on how fast feedback on code proposed is provided:
 - fuel-library: 2095 total reviews in 30 days [13], 80 open reviews,
 average wait time for reviewer - 1d 1h [12]
 - fuel-web: 1789 total reviews in 30 days [14], 52 open reviews, average
 wait time for reviewer - 1d 17h [15]

 There is no need to have deep analysis on whether we have well defined
 areas of ownership in Fuel components or not: we don’t have it formally
 defined, and it’s not documented anywhere. So, finding a right core
 reviewer can be challenging task for a new contributor to Fuel, and this
 issue has to be addressed.


 ** Proposed solution **
 According to stackalytics, for the whole fuel-group we had 262 reviewers
 with 24 core reviewers for the past 180 days [19]. I think that these
 numbers can be considered as high enough in order to think about structure
 in which code review process would be transparent, understandable and
 scalable.

 Let’s first agree on the terminology which I’d like to use. It can take
 pages of precise definitions, however in this email thread I’d like to
 focus on code review process more, and hopefully high level description of
 roles would be enough for now.
 - Contributor: new contributor, who doesn’t work on Fuel regularly and
 doesn’t know team structure (or full time Fuel developer, who 

Re: [openstack-dev] [magnum] versioned objects changes

2015-08-27 Thread Hongbin Lu
-1 from me.

IMHO, the rolling upgrade feature makes sense for a mature project (like Nova), 
but not for a young project like Magnum. It incurs overhead for contributors and 
reviewers to check the object compatibility in each patch. As you mentioned, 
the key benefit of this feature is supporting different versions of magnum 
components running at the same time (i.e. running magnum-api 1.0 with 
magnum-conductor 1.1). I don't think supporting this advanced use case is a 
must at the current stage.

However, I don't mean to be against merging patches for this feature. I just 
disagree with enforcing the rule of object version changes in the near future.

Best regards,
Hongbin

From: Grasza, Grzegorz [mailto:grzegorz.gra...@intel.com]
Sent: August-26-15 4:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] versioned objects changes

Hi,

I noticed that right now, when we make changes (adding/removing fields) in 
https://github.com/openstack/magnum/tree/master/magnum/objects , we don't 
change object versions.

The idea of objects is that each change in their fields should be versioned, 
documentation about the change should also be written in a comment inside the 
object and the obj_make_compatible method should be implemented or updated. See 
an example here:
https://github.com/openstack/nova/commit/ad6051bb5c2b62a0de6708cd2d7ac1e3cfd8f1d3#diff-7c6fefb09f0e1b446141d4c8f1ac5458L27
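
For illustration, here is a minimal stand-in for that pattern that avoids the
real oslo.versionedobjects library so it stays self-contained (the Bay field
names and version history below are invented, not Magnum's actual object):

```python
# Minimal sketch of the versioned-object compatibility pattern: each field
# change bumps VERSION, is documented in a comment, and is handled in
# obj_make_compatible so older peers can still parse the object.
class Bay(object):
    # Version history:
    # 1.0 - initial version
    # 1.1 - added 'node_count' field
    VERSION = "1.1"

    def __init__(self, name, node_count=1):
        self.data = {"name": name, "node_count": node_count}

    def obj_make_compatible(self, primitive, target_version):
        """Backport a serialized object for a peer on an older version."""
        if target_version == "1.0":
            # 1.0 peers do not know about 'node_count'; drop it.
            primitive.pop("node_count", None)
        return primitive
```

In the real library, the conductor performs this backport via RPC when older
code receives an object version it doesn't understand.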

The question is, do you think magnum should support rolling upgrades from the 
next release, or is it still too early?

If yes, I think core reviewers should start checking for these incompatible 
changes.

To clarify, rolling upgrades means support for running magnum services at 
different versions at the same time.
In Nova, there is an RPC call in the conductor to backport objects, which is 
called when older code gets an object it doesn't understand. This patch does 
this in Magnum: https://review.openstack.org/#/c/184791/ .

I can report bugs and propose patches with version changes for this release, to 
get the effort started.

In Mitaka, when Grenade gets multi-node support, it can be used to add CI tests 
for rolling upgrades in Magnum.


/ Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon] Backward-incompatible changes to the Neutron API

2015-08-27 Thread Akihiro Motoki
As Matthias said, Horizon worked (and in many cases works) across releases.

Horizon determines supported features based on keystone catalogs and
extension lists from back-end services (like nova and neutron).
Micro-versioning support may come in the future (though it is not supported yet).

For the backward incompatible API change in VPNaaS, Horizon can cope
if Neutron (including VPNaaS) provides a way to determine which version is
available.
At the moment, the only way is to expose it through the extension list.

On the other hand, it is tough to maintain multiple versions of an
implementation.
It seems reasonable to me that Horizon supports both implementations for one
or two release cycles and drops the older implementation later.
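
A sketch of what that extension-list discovery could look like on the
Horizon side (the extension alias 'vpn-endpoint-groups' is hypothetical,
and the field lists are invented for illustration):

```python
# Pick the new or old VPNService code path based on the extension
# aliases the back end reports.
def supports_endpoint_groups(extension_aliases):
    """True if Neutron advertises the (hypothetical) new extension."""
    return "vpn-endpoint-groups" in extension_aliases


def vpnservice_fields(extension_aliases):
    """Choose which VPNService form fields to render."""
    if supports_endpoint_groups(extension_aliases):
        return ["name", "router_id", "endpoint_group_id"]
    return ["name", "router_id", "subnet_id"]
```

This way a single Horizon release can support both API variants for the
transition cycles.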

Akihiro


2015-08-27 16:29 GMT+09:00 Matthias Runge mru...@redhat.com:

 On 26/08/15 23:55, James Dempsey wrote:
  Greetings Heat/Horizon Devs,
 
  There is some talk about possibly backward-incompatible changes to the
  Neutron VPNaaS API and I'd like to better understand what that means for
  Heat and Horizon.
 
  It has been proposed to change Neutron VPNService objects such that they
  reference a new resource type called an Endpoint Group instead of
  simply a Subnet.
 
  Does this mean that any version of Heat/Horizon would only be able to
  support either the old or new Neutron API, or is there some way to allow
  a version of Heat/Horizon to support both?
 
 In the past, Horizon worked across releases.

 The way horizon works is, it looks for a networking endpoint in the
 keystone catalog. We don't really care if it's nova or neutron
 answering. The rest should be discoverable via the API.
 Horizon uses neutronclient rather than talking to neutron directly
 through its API interface.

 If you make it discoverable, and you'd add that to neutronclient,
 horizon could support both.

 Matthias




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3][dvr][fwaas] FWaaS with DVR

2015-08-27 Thread Mickey Spiegel
Bump

The FWaaS team would really like some feedback from the DVR side.

Mickey

-----Mickey Spiegel/San Jose/IBM wrote: -----
To: openstack-dev@lists.openstack.org
From: Mickey Spiegel/San Jose/IBM
Date: 08/19/2015 09:45AM
Subject: [fwaas][dvr] FWaaS with DVR

Currently, FWaaS behaves differently with DVR, applying to only north/south 
traffic, whereas FWaaS on routers in network nodes applies to both north/south 
and east/west traffic. There is a compatibility issue due to the asymmetric 
design of L3 forwarding in DVR, which breaks the connection tracking that FWaaS 
currently relies on.

I started an etherpad where I hope the community can discuss the problem, 
collect multiple possible solutions, and eventually try to reach consensus 
about how to move forward:
https://etherpad.openstack.org/p/FWaaS_with_DVR

I listed every possible solution that I can think of as a starting point. I am 
somewhat new to OpenStack and FWaaS, so please correct anything that I might 
have misrepresented.

Please add more possible solutions and comment on the possible solutions 
already listed.

Mickey




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Code review process in Fuel and related issues

2015-08-27 Thread Igor Marnat
Mike,
speaking of automation, AFAIK Boris Pavlovic introduced some scripts
in Rally which do a basic preliminary check of the review message,
checking that it's formally correct. They should make reviewers' lives
a bit easier; you might want to introduce them in Fuel as well, if you
haven't yet.
Regards,
Igor Marnat


On Thu, Aug 27, 2015 at 5:37 PM, Aleksandr Didenko
adide...@mirantis.com wrote:
 Hi,

 I'm all for any formalization and automation of the review process. The only
 concern that I see here is about core reviewers' involvement metrics. If we
 succeed in reducing the load on core reviewers, it will mean that core
 reviewers will do fewer code reviews. This could lead to core reviewer
 demotion.

 - Contributor finds SME to review the code. Ideally, contributor can have
 his/her peers to help with code review first. Contributor doesn’t bother
 SME, if CI has -1 on a patch proposed

 I like the idea of adding reviewers automatically based on a MAINTAINERS
 file. In such a case we can drop this ^^ part of the instruction. It would be nice
 if Jenkins could add reviewers after CI +1, or we can use a gerrit dashboard
 so SMEs don't waste their time on reviews that have not yet passed CI and
 do not have +1 from other reviewers.
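
A rough sketch of what such automation could look like (the MAINTAINERS
file format shown here is hypothetical, mapping path globs to reviewer
addresses; this is just an illustration, not an existing Fuel tool):

```python
import fnmatch

# Hypothetical MAINTAINERS format: one "path-glob: reviewer1,reviewer2" per line.
MAINTAINERS = """
deployment/*: adidenko@example.com
docs/*: tech-writer@example.com
*: fuel-core@example.com
"""

def reviewers_for(changed_files, maintainers_text):
    """Return the set of reviewers whose glob matches any changed file."""
    rules = []
    for line in maintainers_text.strip().splitlines():
        pattern, _, people = line.partition(":")
        rules.append((pattern.strip(), [p.strip() for p in people.split(",")]))
    picked = set()
    for path in changed_files:
        for pattern, people in rules:
            if fnmatch.fnmatch(path, pattern):
                picked.update(people)
    return picked
```

A CI job could run this against the patch's file list after the CI +1 and
add the resulting reviewers via the gerrit API.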

 Regards,
 Alex


 On Wed, Aug 19, 2015 at 11:31 AM, Mike Scherbakov mscherba...@mirantis.com
 wrote:

 Hi all,
 let's discuss code review process in Fuel and what we can improve. For
 those who want to just have a quick context of this email, please check out
 presentation slides [5].

 ** Issues **
 Depending on a Fuel subproject, I'm aware of two buckets of issues with
 code review in Fuel:
 a) It is hard to get code reviewed and merged
 b) Quality of code review itself could be better

 First bucket:
 1) It is hard to find subject matter experts who can help and core
 reviewers for the area of code, especially if you are new to the project
 2) Contributor sometimes receives contradicting opinions from other
 reviewers, including cores
 3) An assigned / responsible core reviewer is needed for a feature, in order
 to help with architectural negotiations, guiding the change through, and
 landing the code into master
 4) Long wait time for getting code reviewed

 Quality-related items:
 5) Reviews are not thorough enough - not a full review in one shot. For example, a reviewer can
 put -1 due to a missed comma but not notice a major gap in the code. This
 leads to many patch sets, and demotivation of contributors
 6) Some of the core reviewers have decreased their involvement, and so their
 number of reviews has dropped dramatically. However, they still occasionally merge
 code. I propose to remove these cores, and bring them back if their
 involvement increases again (I very rarely merge code, but I'm one
 of those to be removed from cores). This is standard practice in the OpenStack
 community as well; see Neutron as an example [4, line 270].
 7) As a legacy of the past, we still have old core reviewers being able to
 merge code in all Fuel repos. All new cores have core rights only for single
 repo, which is their primary area of expertise. For example, core team size
 for fuel-library is adidenko + whole fuel-core group [7]. In fact, there are
 just 4 trusted or real core reviewers in fuel-library, not the whole
 fuel-core group.

 These problems are not new to OpenStack and open source in general. You
 can find discussions about same and similar issues in [1], [2], [3].


 ** Analysis of data **
 In order to understand what can be improved, I first mined the data.
 The main source of information was stackalytics.com. Please take a look at a few
 graphs on slides 4-7 [5], built based on data from stackalytics. Major
 conclusions from these graphs:
 1) A rather small number of core reviewers (in comparison with the overall
 number of contributors) reviews 40-60% of patch sets, depending on the repo
 (40% fuel-library, 60% fuel-web). See slide #4.
 2) The load on core reviewers in the Fuel team is higher on average, if you
 compare it with some other OpenStack projects. The average load on a core reviewer
 across Nova, Keystone, Neutron and Cinder is 2.5 reviews a day. In Fuel
 though it is 3.6 for fuel-web and 4.6 for fuel-library. See slide #6.
 3) Statistics on how fast feedback on code proposed is provided:
 - fuel-library: 2095 total reviews in 30 days [13], 80 open reviews,
 average wait time for reviewer - 1d 1h [12]
 - fuel-web: 1789 total reviews in 30 days [14], 52 open reviews, average
 wait time for reviewer - 1d 17h [15]

 There is no need to have deep analysis on whether we have well defined
 areas of ownership in Fuel components or not: we don’t have it formally
 defined, and it’s not documented anywhere. So, finding the right core reviewer
 can be a challenging task for a new contributor to Fuel, and this issue has to
 be addressed.


 ** Proposed solution **
 According to stackalytics, for the whole fuel-group we had 262 reviewers
 with 24 core reviewers for the past 180 days [19]. I think that these
 numbers can be considered high enough to think 

Re: [openstack-dev] [ceilometer] [rally] [sahara] [heat] [congress] [tripleo] ceilometer in gate jobs

2015-08-27 Thread Sergey Lukjanov
Hi,

I think filing the cross-project bug is ok. I've already uploaded patch for
sahara jobs - https://review.openstack.org/217751

Thanks.

On Wed, Aug 26, 2015 at 6:46 PM, Chris Dent chd...@redhat.com wrote:


 [If any of this is wrong I hope someone from infra or qa will
 correct me. Thanks. This feels a bit cumbersome so perhaps there is
 a way to do it in a more automagic fashion[1].]

 In the near future ceilometer will be removing itself from the core
 of devstack and using a plugin instead. This is to allow more
 independent control and flexibility.

 These are the related reviews:

 * remove from devstack: https://review.openstack.org/196383
 * updated jenkins jobs: https://review.openstack.org/196446

 If a project is using ceilometer in its gate jobs then before the
 above can merge adjustments need to be made to make sure that the
 ceilometer plugin is enabled. The usual change for this would be a
 form of:

    DEVSTACK_LOCAL_CONFIG+=$'\n'"enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer"

 I'm not entirely clear on what we will need to do to coordinate this,
 but it is clear some coordination will need to be done such that
 ceilometer remains in devstack until everything that is using
 ceilometer in devstack is ready to use the plugin.

 A grep through the jenkins jobs suggests that the projects in
 $SUBJECT (rally, sahara, heat, congress, tripleo) will need some
 changes.

 How shall we proceed with this?

 One option is for project team members[2] to make a stack of dependent
 patches that are dependent on 196446 above (which itself is dependent
 on 196383) so that it all happens in one fell swoop.

 What are the other options?

 Thanks for your input.

 [1] That is, is it worth considering adding functionality to
 devstack's sense of enabled such that if a service is enabled
 devstack knows how to look for a plugin if it doesn't find local
 support. With the destruction of the stackforge namespace we can
 perhaps guess the git URL for plugins?

 [2] Or me if that's better.

 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent





-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


Re: [openstack-dev] [all][api] New API Guidelines Ready for Cross Project Review

2015-08-27 Thread Thierry Carrez
Everett Toews wrote:
 The following API guidelines are ready for cross project review. They will be 
 merged on Sept. 4 if there's no further feedback.
 
 1. Add description of pagination parameters
 https://review.openstack.org/#/c/190743/
 
 2. Require OpenStack- in headers
 https://review.openstack.org/#/c/215683/
 
 3. Add the condition for using a project term
 https://review.openstack.org/#/c/208264/
 
 4. Added note about caching of responses when using https
 https://review.openstack.org/#/c/185288/
 
 5. add section describing 501 common mistake
 https://review.openstack.org/#/c/183456/

Added to cross-project meeting agenda at:

https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

In the future, feel free to edit that directly :)

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [magnum] versioned objects changes

2015-08-27 Thread Ryan Rossiter
If you want my inexperienced opinion, a young project is the perfect 
time to start this. Nova has had a bunch of problems with versioned 
objects that don't get realized until the next release (because that's 
the point in time at which grenade (or worse, operators) catch this). At 
that point, you then need to hack things around and backport them in 
order to get them working in the old branch. [1] is an excellent example 
of Nova having to backport a fix to an object because we weren't using 
strict object testing.


I don't feel that this should be adding overhead to contributors and 
reviewers. With [2], this test absolutely helps both contributors and 
reviewers. Yes, it requires fixing things when a change happens to an 
object. Learning to do this fix to update object hashes is extremely 
easy to do and I hope my updated comment on there makes it even easier 
(also be aware I am new to OpenStack & Nova as of about 2 months ago, so 
this stuff was new to me too not very long ago).


I understand that something like [2] will cause a test to fail when you 
make a major change to a versioned object. But you *want* that. It helps 
reviewers more easily catch contributors to say "You need to update the 
version, because the hash changed." The sooner you start using versioned 
objects in the way they are designed, the smaller the upfront cost, and 
it will also be a major savings later on if something like [1] pops up.


[1]: https://bugs.launchpad.net/nova/+bug/1474074
[2]: https://review.openstack.org/#/c/217342/
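
The kind of hash check referenced in [2] can be illustrated with a small
self-contained sketch (plain Python, not the actual oslo.versionedobjects
test helper; the Bay field names are invented for illustration):

```python
import hashlib

def object_fingerprint(fields):
    """Hash an object's schema (field names and types) so that any
    change to the schema changes the fingerprint."""
    canonical = ",".join("%s:%s" % (name, ftype)
                         for name, ftype in sorted(fields.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# The test suite records the known-good fingerprint for each object version.
BAY_V1_0_FIELDS = {"id": "int", "name": "str", "node_count": "int"}
KNOWN_HASH = object_fingerprint(BAY_V1_0_FIELDS)

# Adding a field changes the fingerprint, so the test fails until the
# contributor bumps the object version and updates the recorded hash.
changed = dict(BAY_V1_0_FIELDS, master_count="int")
assert object_fingerprint(changed) != KNOWN_HASH
```

That failing assertion is exactly the reviewer prompt described above: the
schema changed, so the version (and recorded hash) must be updated together.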

On 8/27/2015 9:46 AM, Hongbin Lu wrote:


-1 from me.

IMHO, the rolling upgrade feature makes sense for a mature project 
(like Nova), but not for a young project like Magnum. It incurs 
overheads for contributors & reviewers to check the object 
compatibility in each patch. As you mentioned, the key benefit of this 
feature is supporting different version of magnum components running 
at the same time (i.e. running magnum-api 1.0 with magnum-conductor 
1.1). I don’t think supporting this advanced use case is a must at the 
current stage.


However, I don’t mean to oppose merging patches of this feature. I 
just disagree with enforcing the rule of object version changes in the near 
future.


Best regards,

Hongbin

*From:*Grasza, Grzegorz [mailto:grzegorz.gra...@intel.com]
*Sent:* August-26-15 4:47 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* [openstack-dev] [magnum] versioned objects changes

Hi,

I noticed that right now, when we make changes (adding/removing 
fields) in 
https://github.com/openstack/magnum/tree/master/magnum/objects , we 
don't change object versions.


The idea of objects is that each change in their fields should be 
versioned, documentation about the change should also be written in a 
comment inside the object and the obj_make_compatible method should be 
implemented or updated. See an example here:


https://github.com/openstack/nova/commit/ad6051bb5c2b62a0de6708cd2d7ac1e3cfd8f1d3#diff-7c6fefb09f0e1b446141d4c8f1ac5458L27
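
In plain Python, the pattern looks roughly like this (a sketch only - the
real base class lives in oslo.versionedobjects, and the Bay field names
here are invented for illustration):

```python
class Bay:
    # Version history, kept as a comment next to the object (as in Nova):
    # 1.0 - initial version
    # 1.1 - added 'master_count'
    VERSION = "1.1"

    def __init__(self, **fields):
        self.fields = fields

    def obj_make_compatible(self, primitive, target_version):
        """Strip fields that a peer running an older version of the
        service does not know about, so the serialized form can be
        safely handed to it."""
        if target_version == "1.0":
            primitive.pop("master_count", None)
        return primitive

bay = Bay(id=1, name="demo", master_count=3)
old = bay.obj_make_compatible(dict(bay.fields), "1.0")
assert "master_count" not in old
assert old["name"] == "demo"
```

The RPC backport call mentioned below is the hook that invokes this method
when an older service receives an object version it does not understand.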

The question is, do you think magnum should support rolling upgrades 
from next release or maybe it's still too early?


If yes, I think core reviewers should start checking for these 
incompatible changes.


To clarify, rolling upgrades means support for running magnum services 
at different versions at the same time.


In Nova, there is an RPC call in the conductor to backport objects, 
which is called when older code gets an object it doesn’t understand. 
This patch does this in Magnum: https://review.openstack.org/#/c/184791/ .


I can report bugs and propose patches with version changes for this 
release, to get the effort started.


In Mitaka, when Grenade gets multi-node support, it can be used to add 
CI tests for rolling upgrades in Magnum.


/ Greg





--
Thanks,

Ryan Rossiter



Re: [openstack-dev] [horizon][i18n] Horizon plugins translation

2015-08-27 Thread Andreas Jaeger

On 08/27/2015 08:43 PM, Douglas Fish wrote:

I took a quick look at the projects Daisy listed. None of them are ready
to be translated yet.

*Manila UI and Tuskar UI*
These projects don't have PO/POT files yet. In order to be ready they
need to start with step 1 from Daisy's note.

*Horizon Cisco UI*
Has a locale file
https://github.com/openstack/horizon-cisco-ui/tree/master/horizon_cisco_ui/cisco/locale
I don't see a Horizon-Cisco-UI project in either transifex or zanata
(step 2 from Daisy's note).

I'm looking at this page
https://wiki.openstack.org/wiki/Translations/Infrastructure
and it doesn't seem to be up to date with any Zanata-based information.

How would these teams go about implementing steps 2 and 3 if they want
to be available to be translated?


for 2: Ask one of the transifex admins - best send an email to the 
openstack-i18n list and ask for it.


For 3: Send a change to project config to add the translation jobs. This 
might need some tweaking to the scripts in jenkins/scripts as well, since 
horizon-like projects are often set up differently than the normal python 
projects.


Andreas


Doug Fish


Ying Chun Guo guoyi...@cn.ibm.com wrote on 08/25/2015 04:21:14 AM:

  From: Ying Chun Guo guoyi...@cn.ibm.com
  To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
  Date: 08/25/2015 04:27 AM
  Subject: [openstack-dev] [horizon][i18n] Horizon plugins translation
 
  Hi,
 
  I see there are several UI plugins in Horizon, which are in
  OpenStack projects.yaml.
  They are: horizon-cisco-ui, manila-ui and tuskar-ui.
  They are using separated repo now.
 
  As the translation coordinator, I want to understand
  which of them want to be translated into multiple languages in
  Liberty release.
  Are they ready to be translated ?
 
  As a separated repo, if these plugins want to be translated,
  they need to:
  1. Create locale folder and generate pot files in the repo
  2. Create project in translation tool
  3. Create auto jobs to upload pot files and download translations
  from translation tool.
 
  Please let me know your thoughts.
 
  Best regards
  Ying Chun Guo (Daisy)
 






--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




[openstack-dev] [Fuel] How much do we rely on dnsmasq?

2015-08-27 Thread Sean M. Collins
Hi,

I wanted to ask if we have any opinions on dnsmasq, since I am doing
some hacking on adding IPv6 support to fuel, for the provisioning stage.

https://review.openstack.org/#/c/216787/

Depending on if dnsmasq supports DHCPv6 options for PXE booting, we may
need to investigate replacing it with isc-dhcpd. Which is no small task,
I can imagine.

The spec review has links to a post I made on the dnsmasq mailing list
to see if there is anyone there that can answer my question.

-- 
Sean M. Collins



Re: [openstack-dev] [Murano] Documentation on how to Start Contributing

2015-08-27 Thread Vahid S Hashemian
Hi Victor,

You are awesome! Thank you. I took your recommendation and was able to set 
up a virtualenv with the local python-muranoclient and saw a simple change 
I made when running the murano command.
I think I have everything I need now to start working on patches.

I just have an efficiency question:

Going back to murano itself, as I mentioned I have a development folder 
for murano under /home/stack/workspace/murano.
If the changes I make involve multiple files, I would have to remember each 
time which files were changed to make sure I copy them over under 
/opt/stack/murano before restarting the murano-engine daemon.
This can easily cause confusion and is error-prone.

Do you have any advice on how to handle this better?

Thank you for your insights again.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs





From:   Victor Ryzhenkin vryzhen...@mirantis.com
To: Vahid S Hashemian/Silicon Valley/IBM@IBMUS, OpenStack Development 
Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date:   08/26/2015 06:15 PM
Subject:Re: [openstack-dev] [Murano] Documentation on how to Start 
Contributing



Hey! 

the code does not seem to be inside the murano git repository, but under 
python-muranoclient.
The code of murano-api is located here [1]. The client is just a client for 
using the murano api functions from python ;)
But I don't see python-muranoclient files under my deployed devstack
True. This happened because python-muranoclient was installed from PyPI. 
In this case you will find this directory in dist-packages in the python dir.
So, I have two suggestions for you. 
The first: To install the non-released client with devstack, you need to add 
the variable ‘LIBS_FROM_GIT=python-muranoclient’ to your local.conf/localrc. 
After deploying devstack you will see the python-muranoclient dir, and the 
latest client from master will be installed in the system. You can use 
MURANO_PYTHONCLIENT_REPO=/home/… to install the client with devstack from 
your local repository (as you did for murano in your last tries).
The second: I recommend using a python virtualenv for client tests. 
Install the venv, activate it, and install the changed client via pip install -e 
${changed_client_dir}.
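
For reference, the first suggestion would look roughly like this in
local.conf (the local repo path below is only illustrative):

```shell
# local.conf (localrc section): install python-muranoclient from git
# instead of PyPI. MURANO_PYTHONCLIENT_REPO is optional and points at a
# local checkout; the path shown is illustrative.
LIBS_FROM_GIT=python-muranoclient
MURANO_PYTHONCLIENT_REPO=/home/stack/workspace/python-muranoclient
```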
I hope this helps you ;)
Best regards!

[1] https://github.com/openstack/murano/tree/master/murano/api

-- 
Victor Ryzhenkin
Junior QA Engeneer
freerunner on #freenode

On 27 August 2015 at 3:56:56, Vahid S Hashemian (
vahidhashem...@us.ibm.com) wrote:
Hi Victor,

Thanks for pointing out the issue with earlier deployment. Since I took 
your advice I don't run into that problem again.
And thanks for the pointer on how to restart murano daemons. I think I 
understand how to change murano code and test my changes locally.

I have one more question: if I want to make changes to murano-api code, 
the code does not seem to be inside the murano git repository, but under 
python-muranoclient.
But I don't see python-muranoclient files under my deployed devstack so I 
can modify and restart services. An example, would be 
python-muranoclient/muranoclient/v1/shell.py which does not seem to exist 
under /opt/stack.
Am I on the right track? If so, how do I test changes I want to make to 
the api code?

Thank you.

Regards,
- 

Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs






From:Victor Ryzhenkin vryzhen...@mirantis.com
To:Vahid S Hashemian/Silicon Valley/IBM@IBMUS
Cc:OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date:08/26/2015 05:28 PM
Subject:Re: [openstack-dev] [Murano] Documentation on how to Start 
Contributing



Wow!

And changed the plugin.sh file back to original. However, with a cleaned 
devstack (./unstack.sh, ./clean.sh, and removed /opt/stack) I still got 
the error I mentioned in my previous post. Full stack log is attached.

Looks like I’ve found this tricky one ;)

In your log:
2015-08-26 21:02:18.010 | + source 
/home/stack/devstack/extras.d/70-murano.sh stack post-config
2015-08-26 21:02:18.010 | ++ is_service_enabled murano
2015-08-26 21:02:18.012 | ++ return 0

And this one:
2015-08-26 21:02:41.481 | + [[ -f /opt/stack/murano/devstack/plugin.sh ]]
2015-08-26 21:02:41.481 | + source /opt/stack/murano/devstack/plugin.sh 
stack post-config

Murano tried to deploy multiple times. I think this happened because you are 
using the plugin and the libs together. Try removing the murano libs and extras 
from the devstack directory (lib/murano; lib/murano-dashboard; 
extras.d/70-murano.sh), or turn off the plugin. We need to use one method 
at a time.

As per your suggestion I was going to test your first suggestion, but I 
was unable to find any murano service running on my server after the 
completion of ./stack.sh (which I tested installed 

Re: [openstack-dev] [ironic] Re: New API for node create, specifying initial provision state

2015-08-27 Thread Lucas Alvares Gomes
Hi,

On Thu, Aug 27, 2015 at 5:43 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Lucas Alvares Gomes's message of 2015-08-27 02:40:26 -0700:
 On Wed, Aug 26, 2015 at 11:09 PM, Julia Kreger
 juliaashleykre...@gmail.com wrote:
  My apologies for not expressing my thoughts on this matter
  sooner, however I've had to spend some time collecting my
  thoughts.
 
  To me, it seems like we do not trust our users.  Granted,
  when I say users, I mean administrators who likely know more
  about the disposition and capabilities of their fleet than
  could ever be discovered or inferred via software.
 
  Sure, we have other users, mainly in the form of consumers,
  asking Ironic for hardware to be deployed, but the driver for
  adoption is who feels the least amount of pain.
 
  API versioning aside, I have to ask the community, what is
  more important?
 
  - An inflexible workflow that forces an administrator to
  always have a green field, and to step through a workflow
  that we've dictated, which may not apply to their operational
  scenario, ultimately driving them to write custom code to
  inject new nodes into the database directly, which will
  surely break from time to time, causing them to hate Ironic
  and look for a different solution.
 
  - A happy administrator that has the capabilities to do their
  job (and thus manage the baremetal node wherever it is in the
  operator's lifecycle) in an efficient fashion, thus causing
  them to fall in love with Ironic.
 

 I'm sorry, I find the language used in this reply very offensive.
 That's not even a real question; due to the alternatives, you're basically
 asking the community "What's more important, to be happy or to be sad? To be
 efficient or not efficient?"



 Funny, I find your response a bit offensive, as a user of Ironic who has
 been falling in love with it for a couple of years now, and is confused
 by the recent changes to the API that completely ignore me.


I'm sorry if you feel like that, I didn't mean to offend anyone.

 I have _zero_ interest in this workflow. I want my nodes to be available
 as soon as I tell Ironic about them. You've added a step that makes no
 sense to me. Why not just let me create nodes in that state?

 It reminds me of a funny thing Monty Taylor pointed out in the Westin in
 Atlanta. We had to scramble to find our room keys to work the elevator,
 and upon unlocking the elevator, had to then push the floor for that
 room. As he pointed out, "Why doesn't it just go to my floor now?"

 So, I get why you have the workflow, but I don't understand why you didn't
 include a short circuit for your existing users who are _perfectly happy_
 not having the workflow. So now I have to pin to an old API version to
 keep working the way I want, and you will eventually remove that API
 version, and I will proceed to grumble about why I have to change.


Sure, I don't think that in any of my replies I have said that I'm
against the idea of having anything like that, quite the opposite,
I've said that I want to have a fresh mind when I hear the proposal;
meaning no prejudgment.

But we have a process to deal with such requests: in Ironic we have a
spec process [1] which an idea has to go through before it becomes
accepted into the project. The workflow you have zero interest in, and
which makes no sense to you, is the workflow that was discussed by
the Ironic community in the open as part of this spec here [2].
I'm sure everyone would have appreciated your input on that at the time. But
even now it's not too late; the idea of having the short circuit can still
be included in the project, so I encourage you to go through the spec
process [1] and propose it.

[1] https://wiki.openstack.org/wiki/Ironic/Specs_Process
[2] https://review.openstack.org/#/c/133828/7

 Emotions matter to users. You're right that a technical argument helps
 us get our work done efficiently. But don't forget _why Ironic exists_.
 It's not for you to develop on, and it's not just for Nova to talk to.
 It's for your users to handle their datacenter in the wee hours without
 you to hold their hand. Make that hard, get somebody fired or burned
 out, and no technical argument will ever convince them to use Ironic
 again.


Emotions matter, yes, but that's implicit. Nobody will ever be happy if
something doesn't technically work. So, I'm sure the idea that will be
proposed presents technical challenges, and we are a technical
community, so let's focus on that.

Cheers,
Lucas



[openstack-dev] Stackforge migration on October 17; action required for stackforge projects

2015-08-27 Thread James E. Blair
Hi,

In a previous message[1] I described a plan for moving projects in the
stackforge/ git namespace into openstack/.

We have scheduled this migration for Saturday October 17, 2015.

If you are responsible for a stackforge project, please visit the
following wiki page as soon as possible and add your project to one of
the two lists there:

  https://wiki.openstack.org/wiki/Stackforge_Namespace_Retirement

We would like to have a list of all projects which are still active and
wish to be moved, as well as a list of projects that are no longer
maintained and should be retired.

After that, no further action is required -- the Infrastructure team
will handle the system configuration changes needed to effect the move,
however, you may wish to be available shortly after the move to merge
.gitreview changes and fixes related to any unanticipated problems.

Thanks,

Jim

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html



Re: [openstack-dev] [horizon][i18n] Horizon plugins translation

2015-08-27 Thread Douglas Fish
I took a quick look at the projects Daisy listed. None of them are ready to
be translated yet.

Manila UI and Tuskar UI
These projects don't have PO/POT files yet. In order to be ready they need
to start with step 1 from Daisy's note.
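
For step 1, the usual pattern in OpenStack projects is a Babel section in
setup.cfg plus a babel.cfg mapping file, so `python setup.py
extract_messages` can generate the POT file. The paths and keywords below
follow the common convention and are only illustrative for these plugins:

```ini
# setup.cfg - enables "python setup.py extract_messages" to generate
# the POT file (paths/keywords are illustrative, not the plugins' actual
# configuration)
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = manila_ui/locale/manila-ui.pot
```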

Horizon Cisco UI
Has a locale file
https://github.com/openstack/horizon-cisco-ui/tree/master/horizon_cisco_ui/cisco/locale
I don't see a Horizon-Cisco-UI project in either transifex or zanata (step
2 from Daisy's note).

I'm looking at this page
https://wiki.openstack.org/wiki/Translations/Infrastructure
and it doesn't seem to be up to date with any Zanata-based information.

How would these teams go about implementing steps 2 and 3 if they want to
be available to be translated?

Doug Fish


Ying Chun Guo guoyi...@cn.ibm.com wrote on 08/25/2015 04:21:14 AM:

 From: Ying Chun Guo guoyi...@cn.ibm.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
 Date: 08/25/2015 04:27 AM
 Subject: [openstack-dev] [horizon][i18n] Horizon plugins translation

 Hi,

 I see there are several UI plugins in Horizon, which are in
 OpenStack projects.yaml.
 They are: horizon-cisco-ui, manila-ui and tuskar-ui.
 They are using separated repo now.

 As the translation coordinator, I want to understand
 which of them want to be translated into multiple languages in
 Liberty release.
 Are they ready to be translated ?

 As a separated repo, if these plugins want to be translated,
 they need to:
 1. Create locale folder and generate pot files in the repo
 2. Create project in translation tool
 3. Create auto jobs to upload pot files and download translations
 from translation tool.

 Please let me know your thoughts.

 Best regards
 Ying Chun Guo (Daisy)



Re: [openstack-dev] --detailed-description for OpenStack items

2015-08-27 Thread Tim Bell
That could be done, but we'd need to establish an agreed name so that Horizon or 
the CLIs, for example, could filter based on description: "Give me all VMs with 
Ansys in the description."

If we use properties, a consistent approach would be needed so the higher level 
tooling could rely on it (and hide the implementation details). Currently, I 
don't think Horizon lets you set properties for an image or a VM.
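
To make the idea concrete, here is a plain-Python sketch of the "all VMs
with Ansys in the description" query, assuming an agreed `description`
property name (the server records are made up):

```python
# Hypothetical server records as a client might receive them; the
# "description" property name is the convention under discussion.
servers = [
    {"name": "vm1", "properties": {"description": "Ansys batch worker"}},
    {"name": "vm2", "properties": {"description": "web frontend"}},
    {"name": "vm3", "properties": {}},
]

def find_by_description(servers, needle):
    """Return names of servers whose description contains the needle."""
    return [s["name"] for s in servers
            if needle.lower() in s["properties"].get("description", "").lower()]

assert find_by_description(servers, "Ansys") == ["vm1"]
```

The point is that the tooling only has to agree on the property name; the
filtering itself is trivial once the convention exists.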

Tim


-Original Message-
From: Daniel Speichert [mailto:dan...@speichert.pl] 
Sent: 27 August 2015 19:32
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] --detailed-description for OpenStack items

On 8/27/2015 13:23, Tim Bell wrote:

  

 Some project such as cinder include a detailed description option 
 where you can include an arbitrary string with a volume to remind the 
 admins what the volume is used for.

  

 Has anyone looked at doing something similar for Nova for instances 
 and Glance for images ?

  

 In many cases, the names get heavily overloaded with information.

  

 Tim

  

Wouldn't it be appropriate/simple to just specify a metadata like 
description=this is what it's used for?

Regards,
Daniel Speichert





Re: [openstack-dev] [api][keystone][openstackclient] Standards for object name attributes and filtering

2015-08-27 Thread Everett Toews
On Aug 26, 2015, at 4:45 AM, Henry Nash hen...@linux.vnet.ibm.com wrote:

 Hi
 
 With keystone, we recently came across an issue in terms of the assumptions 
 that the openstack client is making about the entities it can show - namely 
 that is assumes all entries have a ‘name’ attribute (which is how the 
 openstack show command works). Turns out, that not all keystone entities 
 have such an attribute (e.g. IDPs for federation) - often the ID is really 
 the name. Is there already agreement across our APIs that all first class 
 entities should have a ‘name’ attribute?  If we do, then we need to change 
 keystone, if not, then we need to change openstack client to not make this 
 assumption (and perhaps allow some kind of per-entity definition of which 
 attribute should be used for ‘show’).

AFAICT, there’s no such agreement in the API WG guidelines [1].

 A follow on (and somewhat related) question to this, is whether we have 
 agreed standards for what should happen if someone provides an unrecognized 
 filter to a list entities API request at the http level (this is related 
 since this is also the hole osc fell into with keystone since, again, ‘name’ 
 is not a recognized filter attribute). Currently keystone ignores filters it 
 doesn’t understand (so if that was your only filter, you would get back all 
 the entities). The alternative approach would of course be to return no 
 entities if the filter is on an attribute we don’t recognize (or even issue a 
 validation or bad request exception).  Again, the question is whether we have 
 agreement across the projects for how such unrecognized filtering should be 
 handled?

The closest thing we have is the Filtering guideline [2] but it doesn’t account 
for this particular case.

Client tool developers would be quite frustrated by a service ignoring filters 
it doesn’t understand or returning no entities if the filter isn’t recognized. 
In both cases, the developer isn’t getting the expected result but you’re 
masking the error made by the developer. 

Much better to return a 400 so the problem can be fixed immediately. Somewhat 
related is this draft [3].

Everett

[1] http://specs.openstack.org/openstack/api-wg/
[2] 
http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html#filtering
[3] https://tools.ietf.org/html/draft-thomson-postel-was-wrong-00
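For illustration, a strict filter check along the lines suggested above could look like this (a minimal sketch with hypothetical names -- ALLOWED_FILTERS, BadRequest -- not keystone's actual code):

```python
class BadRequest(Exception):
    """Maps to an HTTP 400 response in the WSGI layer."""

# Hypothetical whitelist of filter attributes the entity supports.
ALLOWED_FILTERS = {"name", "domain_id", "enabled"}

def validate_filters(query_params):
    """Reject unrecognized filters instead of silently ignoring them."""
    unknown = set(query_params) - ALLOWED_FILTERS
    if unknown:
        # Fail fast: the caller learns immediately that e.g. ?nmae=foo
        # was a typo, rather than getting back every entity unfiltered.
        raise BadRequest("Unrecognized filter(s): %s"
                         % ", ".join(sorted(unknown)))
    return query_params
```

With something like this in place, a typo such as ?nmae=foo surfaces as a 400 at the point of the mistake instead of returning an unfiltered listing.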


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][versionedobjects][ceilometer] explain the benefits of ceilometer+versionedobjects

2015-08-27 Thread gord chung

hi,

there has been a lot of work done across the community and Ceilometer 
relating to versionedobjects. in Ceilometer particularly, this effort 
has somewhat stalled as contributors are unsure of the benefits of 
versionedobjects and how it relates to Ceilometer. there was a little 
skepticism because it was originally sold as 'magic', but reading the slides 
from Vancouver[1], it is not magic. rather it seems the main purpose is 
to handle the evolution of schemas specifically over RPC which seems 
neat but conceptually doesn't seem to fit into how Ceilometer functions.


looking at the patches, Chris brought up a good question in a review[2] 
which to summarise:


If the ceilometer/aodh tools have direct connections to their data 
storage level (they do) and do not use storable distributed objects 
(on which rpc calls are made) in what sense are versioned objects 
useful to the service?


My understanding is that in Nova (for example) versioned objects are 
useful because rpc calls are made on storable objects that can be in 
flight at any time across the distributed service and thus for there 
to be smooth rolling upgrades those in-flight objects need to be able 
to be of different versions.


Ceilometer functions mainly on queue-based IPC. most of the 
communication is async transferring of json payloads where callback is 
not required. the basic workflows are:


polling agent ---> topic queue ---> notification agent ---> topic queue 
---> collector (direct connection to db)

or
OpenStack service ---> topic queue ---> notification agent ---> topic 
queue ---> collector (direct connection to db)

or
from Aodh/alarming pov:
ceilometer-api (direct connection to db) ---> http ---> alarm evaluator 
---> rpc ---> alarm notifier ---> http ---> [Heat/other]


based on the above workflows, is there a good place for adoption of 
versionedobjects? and if so, what is the benefit? most of us are keen on 
adopting consistent design practices but none of us can honestly 
determine why versionedobjects would be beneficial to Ceilometer. if 
someone could explain it to us like we are 5 -- it's probably best to 
explain everything/anything like i'm 5 -- that would help immensely on 
moving this work forward.
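For what it's worth, the core idea can be shown with a toy sketch (this is NOT the real oslo.versionedobjects API -- the class, fields, and method below are invented for illustration): a payload carries its schema version, and a newer node downgrades the payload before sending it to an older peer, which is what makes rolling upgrades of RPC-connected services safe. The benefit only materializes when objects actually travel between independently upgraded services:

```python
class Sample(object):
    """Toy versioned payload; version 1.1 added the 'unit' field."""
    VERSION = "1.1"

    def __init__(self, name, volume, unit="GB"):
        self.name = name
        self.volume = volume
        self.unit = unit

    def to_primitive(self, target_version):
        """Serialize for a peer that only understands target_version."""
        data = {"name": self.name, "volume": self.volume, "unit": self.unit}
        if target_version == "1.0":
            # A 1.0 consumer predates 'unit'; drop it so the older
            # service can still deserialize the payload mid-upgrade.
            data.pop("unit")
        return {"version": target_version, "data": data}
```

In the queue-based, fire-and-forget workflows above there is no such pair of differently-versioned services negotiating over RPC, which is exactly the question being raised.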


cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] upcoming glanceclient release

2015-08-27 Thread Nikhil Komawar

As a part of our continued effort to make v2 the primary API and get
people to consume it without confusion, we are planning to move ahead
with the client release (the release would set the default version of
API to 2). There haven't been any major or minor concerns raised here.

An issue regarding the possible impact of this release due to the major
version bump was raised during the morning meeting; however, the client
release should follow semver semantics and indicate the same. A
corresponding review for release notes exists that should merge before
the release. This medium of communication seems enough; it follows the
prescription for necessary communication. I can't find a definition of
the necessary and sufficient media for communicating this information,
so I will go with what we usually follow.

There are a few bugs [1] that could be considered as part of this
release but do not seem to be blockers. In order to accommodate the
deadlines of the release milestones and impact of releases in the
upcoming week to other projects, we can continue to fix bugs and release
them as a part of 1.x.x releases sooner rather than later as time and
resources permit. Also, the high ones can be part of the stable/* backports if
needed but the description has only shell impact so there isn't a strong
enough reason.

So, we need to move ahead with this release for Liberty.

[1]
https://bugs.launchpad.net/python-glanceclient/+bugs?field.tag=1.0.0-potential

On 8/25/15 12:15 PM, Nikhil Komawar wrote:
 Hi,

 We are planning to cut a client release this Thursday by 1500UTC or so.
 If there are any reviews that you absolutely need and are likely to not
 break the client in the near future, please ping me (nikhil_k) or jokke_
 on IRC #openstack-glance.

 This will most likely be our final client release for Liberty.


-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][manila] latest microversion considered dangerous

2015-08-27 Thread Ben Swartzlander
Manila recently implemented microversions, copying the implementation 
from Nova. I really like the feature! However I noticed that it's legal 
for clients to transmit 'latest' instead of a real version number.


THIS IS A TERRIBLE IDEA!

I recommend removing support for 'latest' and forcing clients to request 
a specific version (or accept the default).


Allowing clients to request the latest microversion guarantees 
undefined (and likely broken) behavior* in every situation where a 
client talks to a server that is newer than it.


Every client can only understand past and present API implementation, 
not future implementations. Transmitting 'latest' implies an assumption 
that the future is not so different from the present. This assumption 
about future behavior is precisely what we don't want clients to make, 
because it prevents forward progress. One of the main reasons 
microversions is a valuable feature is because it allows forward 
progress by letting us make major changes without breaking old clients.


If clients are allowed to assume that nothing will change too much in 
the future (which is what asking for 'latest' implies) then the server 
will be right back in the situation it was trying to get out of -- it 
can never change any API in a way that might break old clients.


I can think of no situation where transmitting 'latest' is better than 
transmitting the highest version that existed at the time the client was 
written.


-Ben Swartzlander

* Undefined/broken behavior unless the server restricts itself to never 
making any backward-compatibility-breaking change of any kind.
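A sketch of the alternative (all version numbers here are invented for illustration): the client pins the highest microversion it was written against and negotiates downward, never asking the server for behavior it cannot have been coded for:

```python
# Highest microversion this client was actually written and tested against.
CLIENT_MAX = (2, 7)

def negotiate(server_min, server_max):
    """Pick the microversion to request, or None if incompatible.

    Unlike 'latest', the result can never exceed what the client
    understands, so future server-side changes cannot surprise it.
    """
    if CLIENT_MAX < server_min:
        return None                      # client is too old for this server
    return min(CLIENT_MAX, server_max)   # never request more than we know
```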



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Cloud Foundry service broker question

2015-08-27 Thread Dmitry
I would say to extend Murano with additional capabilities.
Dependency management for composite applications is very important for
modern development, so I think adding additional use-cases could be very
beneficial for Murano.
On Aug 27, 2015 2:53 PM, Nikolay Starodubtsev nstarodubt...@mirantis.com
wrote:

 Dmitry,
 Do I understand properly that your recommendation is to change some
 murano logic?



 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


 Skype: dark_harlequine1

 2015-08-24 23:31 GMT+03:00 Dmitry mey...@gmail.com:

 I think that you can model application dependencies in a way that will
 allow multi-step provisioning and further maintenance of each component.
 The example of such modeling could be seen in OASIS TOSCA.
 On Aug 24, 2015 6:19 PM, Nikolay Starodubtsev 
 nstarodubt...@mirantis.com wrote:

 Hi all,
 Today Stan Lagun and I discussed the question: How can we provision a
 complex murano app through Cloud Foundry?
 Here you can see logs from #murano related to this discussion:
 http://eavesdrop.openstack.org/irclogs/%23murano/%23murano.2015-08-24.log.html#t2015-08-24T09:53:01

 So, the only way we see now to provision apps which have dependencies
 is step-by-step provisioning, manually updating the JSON files each
 iteration. We appreciate any ideas.
 Here is the link for review:
 https://review.openstack.org/#/c/196820/





 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


 Skype: dark_harlequine1


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][keystone][openstackclient] Standards for object name attributes and filtering

2015-08-27 Thread Morgan Fainberg
On Thu, Aug 27, 2015 at 11:47 AM, Everett Toews everett.to...@rackspace.com
 wrote:

 On Aug 26, 2015, at 4:45 AM, Henry Nash hen...@linux.vnet.ibm.com wrote:

  Hi
 
  With keystone, we recently came across an issue in terms of the
 assumptions that the openstack client is making about the entities it can
 show - namely that it assumes all entries have a ‘name’ attribute (which is
 how the openstack show command works). It turns out that not all keystone
 entities have such an attribute (e.g. IDPs for federation) - often the ID
 is really the name. Is there already agreement across our APIs that all
 first class entities should have a ‘name’ attribute?  If we do, then we
 need to change keystone, if not, then we need to change openstack client to
 not make this assumption (and perhaps allow some kind of per-entity
 definition of which attribute should be used for ‘show’).

 AFAICT, there’s no such agreement in the API WG guidelines [1].

  A follow on (and somewhat related) question to this, is whether we have
 agreed standards for what should happen if someone provides an unrecognized
 filter to a list entities API request at the http level (this is related
 since this is also the hole osc fell into with keystone since, again,
 ‘name’ is not a recognized filter attribute). Currently keystone ignores
 filters it doesn’t understand (so if that was your only filter, you would
 get back all the entities). The alternative approach would of course be to
 return no entities if the filter is on an attribute we don’t recognize (or
 even issue a validation or bad request exception).  Again, the question is
 whether we have agreement across the projects for how such unrecognized
 filtering should be handled?

 The closest thing we have is the Filtering guideline [2] but it doesn’t
 account for this particular case.

 Client tool developers would be quite frustrated by a service ignoring
 filters it doesn’t understand or returning no entities if the filter isn’t
 recognized. In both cases, the developer isn’t getting the expected result
 but you’re masking the error made by the developer.

 Much better to return a 400 so the problem can be fixed immediately.
 Somewhat related is this draft [3].

 Everett

 [1] http://specs.openstack.org/openstack/api-wg/
 [2]
 http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html#filtering
 [3] https://tools.ietf.org/html/draft-thomson-postel-was-wrong-00


There are two different things being talked about here (I think).

1) The unknown filter as Everett has outlined would be something like
'filter_type' as a regex and the service has no idea what that filter type
is. This should be a 400 as the server cannot handle/know how to process
that type of filter.

2) What Henry was discussing is asking for a filter on an entity with a
known filter type (e.g. string match) where the entity doesn't have the
field. An example is the list of IDPs where there is no name field.

Should case 2 be a 400? Or an empty list. You've asked to filter on name,
nothing can match since there is no name.

I'm not opposed to case #2 being an explicit 400 either, just want to
outline the difference clearly here so we are talking the same things.
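To make the two cases concrete, a sketch (hypothetical names, not any project's actual code; only an equality operator is implemented):

```python
KNOWN_OPERATORS = {"eq"}

def apply_filter(entities, field, operator, value, strict=False):
    # Case 1: the filter *operator* itself is unrecognized -> always 400.
    if operator not in KNOWN_OPERATORS:
        raise ValueError("400: unknown filter operator %r" % operator)
    # Case 2: known operator, but this entity type lacks the field
    # (e.g. filtering IDPs on 'name').
    if entities and field not in entities[0]:
        if strict:
            raise ValueError("400: no such attribute %r" % field)
        return []  # lenient alternative: nothing can possibly match
    return [e for e in entities if e.get(field) == value]
```

The strict flag marks exactly the open question: whether case 2 should be a 400 or an empty list.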

Feel free to let me know I misread any of the comments so far :)

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'

2015-08-27 Thread Gilles Dubreuil


On 28/08/15 00:53, Rich Megginson wrote:
 On 08/27/2015 07:00 AM, Gilles Dubreuil wrote:

 On 27/08/15 22:40, Gilles Dubreuil wrote:

 On 27/08/15 16:59, Gilles Dubreuil wrote:

 On 26/08/15 06:30, Rich Megginson wrote:
 This concerns the support of the names of domain scoped Keystone
 resources (users, projects, etc.) in puppet.

 At the puppet-openstack meeting today [1] we decided that
 puppet-openstack will support Keystone domain scoped resource names
 without a '::domain' in the name, only if the 'default_domain_id'
 parameter in Keystone has _not_ been set.  That is, if the default
 domain is 'Default'.  In addition:

 * In the OpenStack L release, if 'default_domain_id' is set, puppet
 will
 issue a warning if a name is used without '::domain'.
 The default domain is always set to 'default' unless overridden to
 something else.
  Just to clarify, I don't see any logical difference between the
  default_domain_id being 'default' or something else.
 
 There is, however, a difference between explicitly setting the value to
 something other than 'default', and not setting it at all.
 
 That is, if a user/operator specifies
 
    keystone_domain { 'someotherdomain':
      is_default => true,
    }
 
  then the user/operator is explicitly telling puppet-keystone that a
 non-default domain is being used, and that the user/operator is aware of
 domains, and will create domain scoped resources with the '::domain' in
 the name.
 

That makes sense.

Let's chase down the default 'default' domain then.
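As a side note, the naming rule being debated boils down to something like the following (a hypothetical Python illustration of the intent only -- the real provider is Ruby, in puppet-keystone):

```python
import warnings

def split_title(title, default_domain="Default"):
    """Split a Keystone resource title into (name, domain)."""
    if "::" in title:
        name, domain = title.rsplit("::", 1)
        return name, domain
    # No explicit domain: fall back, but warn, since a bare name is
    # slated to become an error in a later release.
    warnings.warn("resource name %r has no '::domain'; assuming %r"
                  % (title, default_domain))
    return title, default_domain
```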


 Per keystone.conf comment (as seen below) the default_domain_id,
 whatever its value, is created as a valid domain.

 # This references the domain to use for all Identity API v2 requests
 (which are not aware of domains). A domain with this ID will be created
 for you by keystone-manage db_sync in migration 008. The domain
 referenced by this ID cannot be deleted on the v3 API, to prevent
 accidentally breaking the v2 API. There is nothing special about this
  domain, other than the fact that it must exist in order to maintain
 support for your v2 clients. (string value)
 #default_domain_id = default

 To be able to test if a 'default_domain_id' is set or not, actually
 translates to checking if the id is 'default' or something else.
 
 Not exactly.  There is a difference between explicitly setting the
 value, and implicitly relying on the default 'default' value.
 
  But I don't see the point here. If a user decides to change 'default' to
  'This_is_the_domain_id_for_legacy_v2', how does this help?
 
 If the user changes that, then that means the user has also decided to
 explicitly provided '::domain' in all domain scoped resource names.
 

 If that makes sense then I would actually avoid the intermediate stage:

 * In OpenStack L release:
 Puppet will issue a warning if a name is used without '::domain'.

 * From Openstack M release:
 A name must be used with '::domain'.

 * In the OpenStack M release, puppet will issue a warning if a name is
 used without '::domain', even if 'default_domain_id' is not set.
 Therefore the 'default_domain_id' is never 'not set'.

 * In N (or possibly, O), resource names will be required to have
 '::domain'.

 I understand, from Openstack N release and ongoing, the domain would be
 mandatory.

 So I would like to revisit the list:

 * In OpenStack L release:
Puppet will issue a warning if a name is used without '::domain'.

 * In OpenStack M release:
Puppet will issue a warning if a name is used without '::domain'.

 * From Openstack N release:
A name must be used with '::domain'.


 +1

 The current spec [2] and current code [3] try to support names
 without a
 '::domain' in the name, in non-default domains, provided the name is
 unique across _all_ domains.  This will have to be changed in the
 current code and spec.

 Ack

 [1]
 http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-08-25-15.01.html


 [2]
 http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html


 [3]
 https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L217




 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

[openstack-dev] [Neutron] Question about neutron-ovs-cleanup and L3/DHCP agents

2015-08-27 Thread Tidwell, Ryan
I was looking over the admin guide 
http://docs.openstack.org/admin-guide-cloud/networking_config-agents.html#configure-l3-agent
 and noticed this:

If you reboot a node that runs the L3 agent, you must run the 
neutron-ovs-cleanup command before the neutron-l3-agent service starts.

Taking a look at neutron-ovs-cleanup, it appears to remove stray veth pairs and 
tap ports in OVS.  The admin guide suggests ensuring neutron-ovs-cleanup runs 
before L3 agent and DHCP agent start when rebooting a node.  My question is 
whether there is something special about a reboot vs. an agent restart that is 
the genesis of this note in the admin guide.  What conditions can get you into 
a state where neutron-ovs-cleanup is required?  Is it just a matter of the OVS 
agent getting out of sync and needing to go back to a clean slate? Can anyone 
shed some light on this note?

-Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Documentation on how to Start Contributing

2015-08-27 Thread Jeremy Stanley
On 2015-08-27 12:14:29 -0700 (-0700), Vahid S Hashemian wrote:
[...]
 I have a development folder for murano under
 /home/stack/workspace/murano. If the changes I make involve
 multiple files I would have to remember each time what files were
 changed to make sure I copy them over under /opt/stack/murano
 before restarting murano-engine daemon. This can easily cause
 confusion and is error-prone.
 
 Do you have an advice on how to better handle this?
[...]

[Please remember to avoid top-posting and appropriately trim quoted
material when posting to technical mailing lists.]

Following an upstream-first philosophy, I find it's easiest to
commit and push my work in progress as proposed changes to
review.openstack.org from my workstation, and then pull those same
changes from Gerrit in a throwaway virtual machine where I test them
in DevStack. Not only does this make it easier to keep track of what
you're changing, it also helps you avoid forgetting to push it all
for review later since you're doing that all along the way instead.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][third-party] StorPool Cinder CI

2015-08-27 Thread Peter Penchev
Hi,

Some time ago I sent
http://lists.openstack.org/pipermail/third-party-announce/2015-August/000261.html
to the third-party-announce list as a reply to a message about the
StorPool Cinder third-party CI being disabled.  Well, as I wrote in my
reply there, I think that we have done what Mike Perez asked us to -
reconfigured our CI to run in silent mode and keep it running for a
while, updating our local patches to deal with the changes in Cinder,
Nova and Brick, and put out the logs so that they're available for
public perusal.  That message contains links to some CI runs as of the
time of its sending, but here are some new ones as of today:

http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3129/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3128/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3127/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3126/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3125/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3124/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3123/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3122/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3121/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3120/

So, would that be enough to get our CI system's Gerrit account
reenabled, as a first step towards putting the StorPool Cinder driver
back in?  Of course, if there's anything else we should do, just let us
know.

Once again, thanks to all of you for your work on OpenStack, and
thanks to Anita Kuno, Mike Perez, and everyone else for keeping an eye
on the third-party CI systems!

G'luck,
Peter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-deb] Devstack stable/juno fails to install

2015-08-27 Thread Matt Riedemann



On 8/25/2015 9:15 AM, Matt Riedemann wrote:



On 8/20/2015 6:12 AM, Eduard Matei wrote:

Hi,

ATM our workaround is to manually pip install futures==2.2.0 before
running stack.sh

Any idea when an official fix will be available?

Thanks,
Eduard


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



It's being worked here:

https://bugs.launchpad.net/python-swiftclient/+bug/1486576



This is the request for python-swiftclient 2.3.2:

https://review.openstack.org/#/c/217900/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Brocade CI

2015-08-27 Thread Angela Smith
The full results of lastcomment script are here for last 400 commits: [1][2]

[1] http://paste.openstack.org/show/430074/
[2] http://paste.openstack.org/show/430088/


From: Angela Smith
Sent: Thursday, August 27, 2015 1:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: DL-GRP-ENG-Brocade-Openstack-CI
Subject: RE: [openstack-dev] [cinder] Brocade CI

Mike,
An update on Brocade CI progress.  We are now using the format required for 
results to show in lastcomment script.
We have been consistently reporting for last 9 days.  See results here: [1].
We are still working on resolving recheck issue and adding link to wiki page in 
the failed result comment message.   Update will be sent when that is completed.
Thanks,
Angela

[1] http://paste.openstack.org/show/430074/

From: Angela Smith
Sent: Friday, August 21, 2015 1:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: DL-GRP-ENG-Brocade-Openstack-CI
Subject: RE: [openstack-dev] [cinder] Brocade CI

Mike,
I wanted to update you on our progress on the Brocade CI.
We are currently working on the remaining requirements of adding recheck and 
adding link to wiki page for a failed result.
Also, the CI is now consistently testing and reporting on all cinder reviews 
for the past 3 days.
Thanks,
Angela

From: Nagendra Jaladanki [mailto:nagendra.jalada...@gmail.com]
Sent: Thursday, August 13, 2015 4:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: DL-GRP-ENG-Brocade-Openstack-CI
Subject: Re: [openstack-dev] [cinder] Brocade CI

Ramy,
Thanks for providing the correct message. We will update our commit message 
accordingly.
Thanks,
Nagendra Rao

On Thu, Aug 13, 2015 at 4:43 PM, Asselin, Ramy 
ramy.asse...@hp.com wrote:
Hi Nagendra,

Seems one of the issues is the format of the posted comments. The correct 
format is documented here [1]

Notice the format is not correct:
Incorrect: Brocade Openstack CI (non-voting) build SUCCESS logs at: 
http://144.49.208.28:8000/build_logs/2015-08-13_18-19-19/
Correct: * test-name-no-spaces http://link.to/result : [SUCCESS|FAILURE] some 
comment about the test

Ramy

[1] 
http://docs.openstack.org/infra/system-config/third_party.html#posting-result-to-gerrit

From: Nagendra Jaladanki 
[mailto:nagendra.jalada...@gmail.com]
Sent: Wednesday, August 12, 2015 4:37 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: brocade-openstack...@brocade.com
Subject: Re: [openstack-dev] [cinder] Brocade CI

Mike,

Thanks for your feedback and suggestions. I had sent my response yesterday but 
it looks like it didn't get posted on lists.openstack.org. Hence posting it 
here again.

We reviewed your comments; the following issues were identified, some of which 
are fixed and some with fix plans in progress:

1) Not posting success or failure
 The Brocade CI is a non-voting CI. The CI is posting the comment for build 
success or failure. The report tool is not seeing these. We are working on 
correcting this.
2) Not posting a result link to view logs.
   We could not find any cases where the CI failed to post the link to logs in 
the generated report.  If you have any specific cases where it failed to post 
the logs link, please share with us. But we did see that the CI did not post 
the comment at all for some review patch sets. We are root-causing why the CI 
did not post the comment at all.
3) Not consistently doing runs.
   There were planned down times and the CI did not post during those periods. 
We also observed that the CI was not posting failures in some cases where the 
CI failed due to non-OpenStack issues. We corrected this. Now the CI should be 
posting the results for all patch sets, either success or failure.
We are also doing the following:
- Enhancing the message format to be in line with other CIs.
- Closely monitoring incoming Jenkins requests vs. outgoing builds and 
correcting any issues.

Once again thanks for your feedback and suggestions. We will continue to post 
updates to this list.

Thanks & Regards,

Nagendra Rao Jaladanki

Manager, Software Engineering Manageability Brocade

130 Holger Way, San Jose, CA 95134

On Sun, Aug 9, 2015 at 5:34 PM, Mike Perez 
thin...@gmail.com wrote:
People have asked me at the Cinder midcycle sprint to look at the Brocade CI
to:

1) Keep the zone manager driver in Liberty.
2) Consider approving additional specs that we're submitted before the
   deadline.

Here are the current problems with the last 100 runs [1]:

1) Not posting success or failure.
2) Not posting a result link to view logs.
3) Not consistently doing runs. If you compare with other CI's there are plenty
   missing in a day.

This CI does not follow the guidelines [2]. Please get help [3].

[1] - 

Re: [openstack-dev] [cinder] Brocade CI

2015-08-27 Thread Angela Smith
Mike,
An update on Brocade CI progress.  We are now using the format required for 
results to show in lastcomment script.
We have been consistently reporting for last 9 days.  See results here: [1].
We are still working on resolving recheck issue and adding link to wiki page in 
the failed result comment message.   Update will be sent when that is completed.
Thanks,
Angela

[1] http://paste.openstack.org/show/430074/

From: Angela Smith
Sent: Friday, August 21, 2015 1:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: DL-GRP-ENG-Brocade-Openstack-CI
Subject: RE: [openstack-dev] [cinder] Brocade CI

Mike,
I wanted to update you on our progress on the Brocade CI.
We are currently working on the remaining requirements of adding recheck and 
adding link to wiki page for a failed result.
Also, the CI is now consistently testing and reporting on all cinder reviews 
for the past 3 days.
Thanks,
Angela

From: Nagendra Jaladanki [mailto:nagendra.jalada...@gmail.com]
Sent: Thursday, August 13, 2015 4:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: DL-GRP-ENG-Brocade-Openstack-CI
Subject: Re: [openstack-dev] [cinder] Brocade CI

Ramy,
Thanks for providing the correct message. We will update our commit message 
accordingly.
Thanks,
Nagendra Rao

On Thu, Aug 13, 2015 at 4:43 PM, Asselin, Ramy 
ramy.asse...@hp.com wrote:
Hi Nagendra,

Seems one of the issues is the format of the posted comments. The correct 
format is documented here [1]

Notice the format is not correct:
Incorrect: Brocade Openstack CI (non-voting) build SUCCESS logs at: 
http://144.49.208.28:8000/build_logs/2015-08-13_18-19-19/
Correct: * test-name-no-spaces http://link.to/result : [SUCCESS|FAILURE] some 
comment about the test

Ramy

[1] 
http://docs.openstack.org/infra/system-config/third_party.html#posting-result-to-gerrit

From: Nagendra Jaladanki [mailto:nagendra.jalada...@gmail.com]
Sent: Wednesday, August 12, 2015 4:37 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: brocade-openstack...@brocade.com
Subject: Re: [openstack-dev] [cinder] Brocade CI

Mike,

Thanks for your feedback and suggestions. I had sent my response yesterday, 
but it looks like it didn't get posted on lists.openstack.org. Hence I am 
posting it here again.

We reviewed your comments and identified the following issues; some are fixed 
and fix plans are in progress for the others:

1) Not posting success or failure
   The Brocade CI is a non-voting CI. The CI is posting a comment for build 
success or failure, but the report tool is not seeing these. We are working on 
correcting this.
2) Not posting a result link to view logs.
   We could not find any cases in the generated report where the CI failed to 
post the link to the logs. If you have any specific cases where it failed to 
post a logs link, please share them with us. We did, however, see that the CI 
did not post a comment at all for some review patch sets, and we are 
root-causing that issue.
3) Not consistently doing runs.
   There were planned down times, and the CI did not post during those 
periods. We also observed that the CI was not posting failures in some cases 
where it failed due to non-OpenStack issues. We have corrected this; the CI 
should now post a result, either success or failure, for all patch sets.
We are also doing the following:
- Enhance the message format to be inline with other CIs.
- Closely monitoring the incoming Jenkin's request vs out going builds and 
correcting if there are any issues.

Once again, thanks for your feedback and suggestions. We will continue to 
post updates to this list.

Thanks & Regards,

Nagendra Rao Jaladanki

Manager, Software Engineering Manageability Brocade

130 Holger Way, San Jose, CA 95134

On Sun, Aug 9, 2015 at 5:34 PM, Mike Perez thin...@gmail.com wrote:
People have asked me at the Cinder midcycle sprint to look at the Brocade CI
to:

1) Keep the zone manager driver in Liberty.
2) Consider approving additional specs that we're submitted before the
   deadline.

Here are the current problems with the last 100 runs [1]:

1) Not posting success or failure.
2) Not posting a result link to view logs.
3) Not consistently doing runs. If you compare with other CI's there are plenty
   missing in a day.

This CI does not follow the guidelines [2]. Please get help [3].

[1] - http://paste.openstack.org/show/412316/
[2] - 
http://docs.openstack.org/infra/system-config/third_party.html#requirements
[3] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Questions

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [glance][murano] Glance V3 (Artifacts) usage in Murano

2015-08-27 Thread Jeremy Stanley
On 2015-08-26 12:48:23 -0400 (-0400), Nikhil Komawar wrote:
 Can't find the logs on eavesdrop atm. Discussed yesterday on
 #openstack-relmgr-office around UTC evening.

URL: 
http://eavesdrop.openstack.org/irclogs/%23openstack-relmgr-office/%23openstack-relmgr-office.2015-08-24.log.html#t2015-08-24T14:26:34
 

-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] CI for reliable live-migration

2015-08-27 Thread Kraminsky, Arkadiy
Hello,

I'm a new developer on the Openstack project and am in the process of creating 
live migration CI for HP's 3PAR and Lefthand backends. I noticed you guys are 
looking for someone to pick up Joe Gordon's change for volume backed live 
migration tests and we can sure use something like this. I can take a look into 
the change, and see what I can do. :)

Thanks,

Arkadiy Kraminsky

From: Joe Gordon [joe.gord...@gmail.com]
Sent: Wednesday, August 26, 2015 9:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] CI for reliable live-migration



On Wed, Aug 26, 2015 at 8:18 AM, Matt Riedemann mrie...@linux.vnet.ibm.com 
wrote:


On 8/26/2015 3:21 AM, Timofei Durakov wrote:
Hello,

Here is the situation: nova has a live-migration feature but doesn't have a
CI job to cover it with functional tests, only
gate-tempest-dsvm-multinode-full (non-voting, btw), which covers
block-migration only.
The problem is that live-migration can behave differently depending on how
the instance was booted (volume-backed/ephemeral) and how the environment is
configured (whether there is a shared instance directory (NFS, for example),
or RBD is used to store the ephemeral disk), or, for example, the user
doesn't have that and is going to use the --block-migrate flag. To claim that
we have reliable live-migration in nova, we should check it at least on
environments with RBD or NFS, as these are more common than environments
without any shared storage.
Here are the steps for that:

 1. make  gate-tempest-dsvm-multinode-full voting, as it looks OK for
block-migration testing purposes;

When we are ready to make multinode voting we should remove the equivalent 
single node job.


If it's been stable for awhile then I'd be OK with making it voting on nova 
changes, I agree it's important to have at least *something* that gates on 
multi-node testing for nova since we seem to break this a few times per release.

Last I checked it isn't as stable as single node yet: 
http://jogo.github.io/gate/multinode [0].  The data going into graphite is a 
bit noisy so this may be a red herring, but at the very least it needs to be 
investigated. When I was last looking into this there were at least two known 
bugs:

https://bugs.launchpad.net/nova/+bug/1445569
https://bugs.launchpad.net/nova/+bug/1462305


[0] 
http://graphite.openstack.org/graph/?from=-36hoursheight=500until=nowwidth=800bgcolor=fffgcolor=00yMax=100yMin=0target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-full.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-full.{SUCCESS,FAILURE})),%275hours%27),%20%27gate-tempest-dsvm-full%27),%27orange%27)target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.{SUCCESS,FAILURE})),%275hours%27),%20%27gate-tempest-dsvm-multinode-full%27),%27brown%27)title=Check%20Failure%20Rates%20(36%20hours)
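The dashboard linked above computes a smoothed failure percentage per job. A
minimal sketch of that calculation in plain Python (mirroring the graphite
expression movingAverage(asPercent(FAILURE, SUCCESS+FAILURE), window); the
function and parameter names here are illustrative, not graphite API):

```python
def failure_rate_moving_average(failures, successes, window):
    """Per-interval failure percentage, smoothed with a trailing moving average.

    failures/successes are parallel lists of per-interval counts; window is
    the number of trailing intervals to average over.
    """
    # asPercent(FAILURE, SUCCESS + FAILURE), guarding against empty intervals
    rates = []
    for f, s in zip(failures, successes):
        total = f + s
        rates.append(100.0 * f / total if total else 0.0)
    # trailing movingAverage over `window` intervals
    smoothed = []
    for i in range(len(rates)):
        chunk = rates[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed
```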


 2. contribute to tempest to cover volume-backed instances live-migration;

jogo has had a patch up for this for awhile:

https://review.openstack.org/#/c/165233/

Since he's not full time on openstack anymore I assume some help there in 
picking up the change would be appreciated.

yes please


 3. make another job with rbd for storing ephemerals, it also requires
changing tempest config;

We already have a voting ceph job for nova - can we turn that into a multi-node 
testing job and run live migration with shared storage using that?

 4. make job with nfs for ephemerals.

Can't we use a multi-node ceph job (#3) for this?


These steps should help us to improve current situation with
live-migration.

--
Timofey.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

Thanks,

Matt Riedemann


__

Re: [openstack-dev] [Murano] Documentation on how to Start Contributing

2015-08-27 Thread Jeremy Stanley
On 2015-08-27 13:55:16 -0700 (-0700), Vahid S Hashemian wrote:
 Thank you for your response. What you suggested makes sense. Could
 you please also confirm
 
 - Whether you push your work in progress to review.openstack.org
 and go to the site and manually mark it as work in progress (so
 reviewers don't assume it's ready for review)?

Yes, I do (well, I actually use https://pypi.python.org/pypi/gertty
to set a workflow -1 vote rather than doing it through the Gerrit
WebUI because working in a console is more comfortable for me).

 - When you pull the change on your test server, do you pull
 directly into /opt/stack/murano and restart the daemon?

I haven't worked on Murano specifically, but for other services I
work on testing patches for under DevStack I fetch them from
review.openstack.org and check them out (or sometimes cherry-pick
them depending on the situation) directly in the repos under the
/opt/stack directory and then break and re-run the affected services
in their respective screen sessions.
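The fetch-and-apply step described above can be written down concretely. A
hedged sketch that just builds the git command lines one would run inside
/opt/stack/<project> (the refs/changes layout — last two digits of the change
number, then change number, then patchset — is Gerrit's convention; the host
is an example):

```python
def gerrit_fetch_commands(project, change, patchset, cherry_pick=False):
    """Build the git commands to fetch a Gerrit patchset and apply it.

    Returns a list of argv-style command lists; does not execute anything.
    """
    # Gerrit exposes patchsets as refs/changes/<NN>/<change>/<patchset>,
    # where NN is the change number modulo 100, zero-padded.
    ref = "refs/changes/%02d/%d/%d" % (change % 100, change, patchset)
    fetch = ["git", "fetch", "https://review.openstack.org/%s" % project, ref]
    apply_cmd = (["git", "cherry-pick", "FETCH_HEAD"] if cherry_pick
                 else ["git", "checkout", "FETCH_HEAD"])
    return [fetch, apply_cmd]
```

After applying, one would break and restart the affected service in its
screen session, as described above.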
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-deb] Devstack stable/juno fails to install

2015-08-27 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2015-08-27 15:50:13 -0500:
 
 On 8/25/2015 9:15 AM, Matt Riedemann wrote:
 
 
  On 8/20/2015 6:12 AM, Eduard Matei wrote:
  Hi,
 
  ATM our workaround is to manually pip install futures==2.2.0 before
  running stack.sh
 
  Any idea when an official fix will be available?
 
  Thanks,
  Eduard
 
 
  __
 
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  It's being worked here:
 
  https://bugs.launchpad.net/python-swiftclient/+bug/1486576
 
 
 This is the request for python-swiftclient 2.3.2:
 
 https://review.openstack.org/#/c/217900/
 

I just completed that release, so if it's not already built it will be
shortly.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][third-party] StorPool Cinder CI

2015-08-27 Thread Peter Penchev
On Fri, Aug 28, 2015 at 12:22 AM, Asselin, Ramy ramy.asse...@hp.com wrote:
 Hi Peter,

 Your log files require downloads. Please fix it such that they can be viewed 
 directly [1]

Hi, and thanks for the fast reply!  Yes, I'll try to change the
webserver's configuration, although the snippet in the FAQ won't help
a lot, since it's a lighttpd server, not Apache.  I'll get back to you
when I've figured something out.

 Also, it's not clear where in your scripts you actually pull down the cinder 
 patch.

It's in the pre_test_hook, the for loop that rebases cinder, nova,
os-brick, etc. onto the storpool-osci branch of our local repo:

2015-08-27 20:18:15.857 | + echo '=== Fetch the StorPool modifications:'
2015-08-27 20:18:15.858 | === Fetch the StorPool modifications:
2015-08-27 20:18:15.859 | + for i in cinder devstack devstack-gate
nova os-brick tempest
2015-08-27 20:18:15.860 | + echo '=== - cinder'
2015-08-27 20:18:15.861 | === - cinder
2015-08-27 20:18:15.862 | + cd /opt/stack/new/cinder
2015-08-27 20:18:15.863 | + git remote add osci
git+ssh://logarchive@osci-jenkins-master/var/lib/osci/git/cinder/
2015-08-27 20:18:15.864 | + git fetch osci
2015-08-27 20:18:16.033 | From
git+ssh://osci-jenkins-master/var/lib/osci/git/cinder
2015-08-27 20:18:16.034 |  * [new branch]  master - osci/master
2015-08-27 20:18:16.036 |  * [new branch]  storpool-osci -
osci/storpool-osci
[snip]
2015-08-27 20:18:16.046 | + git rebase osci/storpool-osci
2015-08-27 20:18:16.090 | First, rewinding head to replay your work on
top of it...
2015-08-27 20:18:16.155 | Applying: Port test_nfs to Python 3
2015-08-27 20:18:16.185 | Applying: Implement function to
manage/unmanage snapshots

...and then proceeds to do the same for the rest of the OpenStack
projects listed on the third line.

G'luck,
Peter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Documentation on how to Start Contributing

2015-08-27 Thread Vahid S Hashemian
Hi Jeremy,

Thank you for your response. What you suggested makes sense. Could you 
please also confirm

- Whether you push your work in progress to review.openstack.org and go to 
the site and manually mark it as work in progress (so reviewers don't 
assume it's ready for review)?
- When you pull the change on your test server, do you pull directly into 
/opt/stack/murano and restart the daemon?

Thanks.

--Vahid Hashemian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][third-party] StorPool Cinder CI

2015-08-27 Thread Asselin, Ramy
Hi Peter,

Your log files require downloads. Please fix it such that they can be viewed 
directly [1]

Also, it's not clear where in your scripts you actually pull down the cinder 
patch.

Ramy

[1] 
http://docs.openstack.org/infra/system-config/third_party.html#faq-frequently-asked-questions


-Original Message-
From: Peter Penchev [mailto:openstack-...@storpool.com] 
Sent: Thursday, August 27, 2015 1:43 PM
To: OpenStack openstack-dev@lists.openstack.org
Subject: [openstack-dev] [cinder][third-party] StorPool Cinder CI

Hi,

Some time ago I sent
http://lists.openstack.org/pipermail/third-party-announce/2015-August/000261.html
to the third-party-announce list as a reply to a message about the StorPool 
Cinder third-party CI being disabled.  Well, as I wrote in my reply there, I 
think that we have done what Mike Perez asked us to - reconfigured our CI to 
run in silent mode and keep it running for a while, updating our local patches 
to deal with the changes in Cinder, Nova and Brick, and put out the logs so 
that they're available for public perusal.  That message contains links to some 
CI runs as of the time of its sending, but here are some new ones as of today:

http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3129/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3128/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3127/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3126/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3125/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3124/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3123/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3122/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3121/
http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3120/

So, would that be enough to get our CI system's Gerrit account reenabled, as a 
first step towards putting the StorPool Cinder driver back in?  Of course, if 
there's anything else we should do, just let us know.

Once again, thanks to all of you for your work on OpenStack, and thanks to 
Anita Kuno, Mike Perez, and everyone else for keeping an eye on the third-party 
CI systems!

G'luck,
Peter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][murano] Glance V3 (Artifacts) usage in Murano

2015-08-27 Thread Nikhil Komawar
Thanks Jeremy!

On 8/27/15 5:10 PM, Jeremy Stanley wrote:
 On 2015-08-26 12:48:23 -0400 (-0400), Nikhil Komawar wrote:
 Can't find the logs on eavesdrop atm. Discussed yesterday on
 #openstack-relmgr-office around UTC evening.
 URL: 
 http://eavesdrop.openstack.org/irclogs/%23openstack-relmgr-office/%23openstack-relmgr-office.2015-08-24.log.html#t2015-08-24T14:26:34
  


-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Help with stable/juno branches / releases

2015-08-27 Thread Robert Collins
On 27 August 2015 at 10:32, Tony Breeds t...@bakeyournoodle.com wrote:

 No problem.  It seemed like such a simple thing :/

Hah. No :/. Its at the very core of the issues we've had with distros
packaging Kilo, and we had with the opening of Liberty, and the rework
of the plumbing here I've been leading for the last cycle - and that
we're nearly done with the first pass. At which point we can look at
new features ;).

 Right now I need 3 releases for oslo packages and then releases for at least 5
 other projects from stable/juno (and that after I get the various reviews
 closed out) and it's quite possible that these releases will in turn generate
 more.

 I have to admit I'm questioning if it's worth it.  Not because I think it's too
 hard, but it is substantial effort to put into juno, which is (in theory) going
 to be EOL'd in 6 - 10 weeks.

I'm pretty sure it *will* be EOL'd. OTOH that's 10 weeks of fixes folk
can get. I think you should do it if you've the stomach for it, and if
its going to help someone. I can aid by cutting library releases for
you I think (haven't checked stable releases yet, and I need to update
myself on the tooling changes in the last fortnight...).

 I feel bad for asking that question as I've pulled in favors and people have
 agreed to $things that they're not entirely comfortable with so we can fix
 this.

 Is it worth discussing this at next weeks cross-project meeting?

Maybe.

Kilo is going to be an issue for another cycle, and then after that
this should be SO MUCH BETTER.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] convergence rally test results (so far)

2015-08-27 Thread Angus Salkeld
Hi

I have been running some rally tests against convergence and our existing
implementation to compare.

So far I have done the following:

   1. defined a template with a resource group
   
https://github.com/asalkeld/convergence-rally/blob/master/templates/resource_group_test_resource.yaml.template
   2. the inner resource looks like this:
   
https://github.com/asalkeld/convergence-rally/blob/master/templates/server_with_volume.yaml.template
(it
   uses TestResource to attempt to be a reasonable simulation of a
   server+volume+floatingip)
   3. defined a rally job:
   
https://github.com/asalkeld/convergence-rally/blob/master/increasing_resources.yaml
that
   creates X resources then updates to X*2 then deletes.
   4. I then ran the above with/without convergence and with 2,4,8
   heat-engines

Here are the results compared:
https://docs.google.com/spreadsheets/d/12kRtPsmZBl_y78aw684PTBg3op1ftUYsAEqXBtT800A/edit?usp=sharing

Some notes on the results so far:

   -  convergence with only 2 engines does suffer from RPC overload (it
   gets message timeouts on larger templates). I wonder if this is the problem
   in our convergence gate...
   - convergence does very well with a reasonable number of engines running.
   - delete is slightly slower on convergence


Still to test:

   - the above, but measure memory usage
   - many small templates (run concurrently)
   - we need to ask projects using Heat to try with convergence (Murano,
   TripleO, Magnum, Sahara, etc..)

Any feedback welcome (suggestions on what else to test).

-Angus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3][dvr][fwaas] FWaaS

2015-08-27 Thread Sean M. Collins
Do you have a known good commit for the FwaaS repo? Or Neutron? Perhaps you can 
run a git-bisect to find the commit that introduced. Labor intensive, but I did 
a little digging in FwaaS and didn't see anything that was obvious.
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][keystone][openstackclient] Standards for object name attributes and filtering

2015-08-27 Thread Chen, Wei D
 On Aug 26, 2015, at 4:45 AM, Henry Nash hen...@linux.vnet.ibm.com wrote:
 
  Hi
 
  With keystone, we recently came across an issue in terms of the assumptions 
  that the openstack client is making about the
 entities it can show - namely that is assumes all entries have a 'name' 
 attribute (which is how the openstack show
 command works). Turns out, that not all keystone entities have such an 
 attribute (e.g. IDPs for federation) - often the ID is
 really the name. Is there already agreement across our APIs that all first 
 class entities should have a 'name' attribute?  If
 we do, then we need to change keystone, if not, then we need to change 
 openstack client to not make this assumption (and
 perhaps allow some kind of per-entity definition of which attribute should be 
 used for 'show').
 
 AFAICT, there's no such agreement in the API WG guidelines [1].
 
  A follow on (and somewhat related) question to this, is whether we have 
  agreed standards for what should happen if some
 provides an unrecognized filter to a list entities API request at the http 
 level (this is related since this is also the hole osc
fell
 into with keystone since, again, 'name' is not a recognized filter 
 attribute). Currently keystone ignores filters it doesn't
 understand (so if that was your only filter, you would get back all the 
 entities). The alternative approach would of course be
 to return no entities if the filter is on an attribute we don't recognize (or 
 even issue a validation or bad request exception).
 Again, the question is whether we have agreement across the projects for how 
 such unrecognized filtering should be
 handled?
 
 The closest thing we have is the Filtering guideline [2] but it doesn't 
 account for this particular case.
 
 Client tool developers would be quite frustrated by a service ignoring 
 filters it doesn't understand or returning no entities if
 the filter isn't recognized. In both cases, the developer isn't getting the 
 expected result but you're masking the error made
 by the developer.
 
 Much better to return a 400 so the problem can be fixed immediately. Somewhat 
 related is this draft [3].

I think Henry's point is whether to return everything or return nothing when 
there is a filter the server doesn't recognize. Looking at the sample object 
in the spec [1]: given a query like GET /app/items?bugus=buzz, will it return 
both of the items we have? I think this confuses client developers a lot, 
gives the user the wrong indication, and points them in the wrong direction 
based on the response. This seems to me more like a bug, or at least 
something we need to improve.

I agree that returning 400 is a good idea; that way, the client user would 
know what happened.
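As a sketch of the strict-validation behaviour being argued for here —
rejecting a list request that names a filter the API does not recognize,
instead of silently ignoring it — the following is illustrative only (the
recognized-filter set and function names are assumptions, not keystone's
actual validation code):

```python
# Assumed attribute set for illustration; a real service would derive this
# from its resource schema.
RECOGNIZED_FILTERS = {"name", "enabled", "domain_id"}

def validate_filters(query_params):
    """Return (status, payload): 400 on unknown filters, else 200.

    query_params is a dict of query-string key/value pairs.
    """
    unknown = sorted(set(query_params) - RECOGNIZED_FILTERS)
    if unknown:
        # Fail fast so the client developer sees their mistake immediately,
        # rather than receiving all (or no) entities.
        return 400, {"error": "unrecognized filter(s): %s" % ", ".join(unknown)}
    return 200, {"filters": dict(query_params)}
```

Under this scheme, GET /app/items?bugus=buzz would produce a 400 naming
"bugus" rather than returning every item.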

Best Regards,
Dave Chen

 Everett
 
 [1] http://specs.openstack.org/openstack/api-wg/
 [2] 
 http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html#filtering
 [3] https://tools.ietf.org/html/draft-thomson-postel-was-wrong-00
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][keystone][openstackclient] Standards for object name attributes and filtering

2015-08-27 Thread Chen, Wei D
 Hi
 
 With keystone, we recently came across an issue in terms of the assumptions 
 that the openstack client is making about the
 entities it can show - namely that is assumes all entries have a ‘name’ 
 attribute (which is how the openstack show
 command works). Turns out, that not all keystone entities have such an 
 attribute (e.g. IDPs for federation) - often the ID is
 really the name. Is there already agreement across our APIs that all first 
 class entities should have a ‘name’ attribute?  If
 we do, then we need to change keystone, if not, then we need to change 
 openstack client to not make this assumption (and
 perhaps allow some kind of per-entity definition of which attribute should be 
 used for ‘show’).
 

I think OSC makes this assumption based on there being no need to query by 
the ID.
'openstack show' tries to get the IDP as follows:
curl -s -X GET 
http://127.0.0.1:35357/v3/OS-FEDERATION/identity_providers/notexsitingIDP -H 
Content-Type: application/json -H Accept: application/json -H 
X-Auth-Token: 05e74f9448124aaba339cd809fd7b219

It then falls back to filtering by 'name'. In this case, if we allowed a 
per-entity definition, we might try again with a query like this:
curl GET 
http://127.0.0.1:35357/v3/OS-FEDERATION/identity_providers?id=notexsitingIDP

But this is not necessary, since we have already tried with the ID; why try 
again with a different API? Both APIs *should* return the same response; one 
returning nothing while the other returns everything does not make sense. If 
that happens, it is a bug in the server, IMO.


 A follow on (and somewhat related) question to this, is whether we have 
 agreed standards for what should happen if some
 provides an unrecognized filter to a list entities API request at the http 
 level (this is related since this is also the hole osc fell
 into with keystone since, again, ‘name’ is not a recognized filter 
 attribute). Currently keystone ignores filters it doesn’t
 understand (so if that was your only filter, you would get back all the 
 entities). The alternative approach would of course be
 to return no entities if the filter is on an attribute we don’t recognize (or 
 even issue a validation or bad request exception).
 Again, the question is whether we have agreement across the projects for how 
 such unrecognized filtering should be
 handled?
 
 Thanks
 
 Henry
 Keystone Core
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][keystone][openstackclient] Standards for object name attributes and filtering

2015-08-27 Thread Chen, Wei D

  Hi
 
  With keystone, we recently came across an issue in terms of the assumptions 
  that the openstack client is making about the
  entities it can show - namely that is assumes all entries have a ‘name’ 
  attribute (which is how the openstack show
  command works). Turns out, that not all keystone entities have such an 
  attribute (e.g. IDPs for federation) - often the ID is
  really the name. Is there already agreement across our APIs that all first 
  class entities should have a ‘name’ attribute?
 If
  we do, then we need to change keystone, if not, then we need to change 
  openstack client to not make this assumption (and
  perhaps allow some kind of per-entity definition of which attribute should 
  be used for ‘show’).
 
 
 I think OSC do this assumption based on that there is no need to query by the 
 ID.
 'openstack show' try to get the IDP by following,
 curl -s -X GET 
 http://127.0.0.1:35357/v3/OS-FEDERATION/identity_providers/notexsitingIDP -H 
 Content-Type:
 application/json -H Accept: application/json -H X-Auth-Token: 
 05e74f9448124aaba339cd809fd7b219
 
 Then fail back to filter by the 'name'. In this case, if we allow the 
 per-entity definition, we may tried it again with the query
 like this,
 curl GET 
 http://127.0.0.1:35357/v3/OS-FEDERATION/identity_providers?id=notexsitingIDP
 
 but this is not necessary since we have tried it with the ID, why we still 
 tried it again with different API? the both APIs
 *should* has the same response instead of
 one get nothing and another get everything, this is not make sense. If there 
 is, this is a bug of the server IMO.
 
To correct myself: both APIs will return nothing if we allow a per-entity 
definition in OSC, but the ID is tried two times.

Best Regards,
Dave Chen 
 
  A follow on (and somewhat related) question to this, is whether we have 
  agreed standards for what should happen if some
  provides an unrecognized filter to a list entities API request at the http 
  level (this is related since this is also the hole osc fell
  into with keystone since, again, ‘name’ is not a recognized filter 
  attribute). Currently keystone ignores filters it doesn’t
  understand (so if that was your only filter, you would get back all the 
  entities). The alternative approach would of course be
  to return no entities if the filter is on an attribute we don’t recognize 
  (or even issue a validation or bad request exception).
  Again, the question is whether we have agreement across the projects for 
  how such unrecognized filtering should be
  handled?
 
  Thanks
 
  Henry
  Keystone Core
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Millions of packets going to a single flow?

2015-08-27 Thread Gabe Black
Hi,
I've been running against openstack-dev (master branch) using the 
stackforge/networking-ovs-dpdk master branch (OVS GIT TAG 
1e77bbe565bbf5ae7f4c47f481a4097d666d3d68), using the single-node local.conf 
file on Ubuntu 15.04.  I've had to patch a few things to get past ERRORs during 
start:
- disable/purge apparmor
- patch ovs-dpdk-init to find correct qemu group and launch ovs-dpdk with sg
- patch ovs_dvr_neutron_agent.py to change default datapath_type to be netdev
- modify ml2_conf.ini to have [ovs] datapath_type=netdev
- create a symlink between /usr/var/run/openvswitch and /var/run/openvswitch

Everything appears to be working from a horizon point of view, I can launch 
VMs, create routers/networks, etc.

However, I've tried to figure out how to get two vms using ovs-dpdk 
(ovs-vswitchd --dpdk ...) to be able to ping each other (or do anything network 
related for that matter - like get an ipv4 address (via dhcp), ping the 
gateway/router, etc), but to no avail.

I'm wondering if there is a flow that is bogus as when I dump the flows:

# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=5234.334s, table=0, n_packets=0, n_bytes=0, idle_age=5234, priority=3,in_port=2,dl_vlan=2001 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=5246.680s, table=0, n_packets=0, n_bytes=0, idle_age=5246, priority=2,in_port=1 actions=drop
 cookie=0x0, duration=5246.613s, table=0, n_packets=0, n_bytes=0, idle_age=5246, priority=2,in_port=2 actions=drop
 cookie=0x0, duration=5246.744s, table=0, n_packets=46828985, n_bytes=2809779780, idle_age=0, priority=0 actions=NORMAL
 cookie=0x0, duration=5246.740s, table=23, n_packets=0, n_bytes=0, idle_age=5246, priority=0 actions=drop
 cookie=0x0, duration=5246.738s, table=24, n_packets=0, n_bytes=0, idle_age=5246, priority=0 actions=drop

There is only one flow that ever gets any packets, and it gets millions of them 
apparently.  Viewing the number of packets sent on an interface (via ifconfig) 
doesn't have any interfaces with near that many packets.
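For anyone chasing a similar mismatch, a small throwaway script (illustrative only, not part of any OVS tooling) can pull the n_packets counters out of the dump-flows text to spot the busy flow at a glance:

```python
import re

# Parse `ovs-ofctl dump-flows` text and report n_packets per flow,
# busiest first, so the flow accumulating traffic stands out.
FLOW_RE = re.compile(r"n_packets=(\d+).*?(priority=\S+)\s+actions=(\S+)")


def packet_counts(dump_text):
    """Return (n_packets, match_fields, actions) tuples, busiest first."""
    rows = [(int(m.group(1)), m.group(2), m.group(3))
            for m in FLOW_RE.finditer(dump_text)]
    return sorted(rows, reverse=True)


# Two lines taken from the dump above, as sample input.
sample = ("cookie=0x0, duration=5246.744s, table=0, n_packets=46828985, "
          "n_bytes=2809779780, idle_age=0, priority=0 actions=NORMAL\n"
          "cookie=0x0, duration=5246.680s, table=0, n_packets=0, "
          "n_bytes=0, idle_age=5246, priority=2,in_port=1 actions=drop\n")
```

Running packet_counts on the full dump shows the priority=0 NORMAL flow as the only one with a non-zero counter.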

Dumping ports doesn't show any stats near those values either:
===
ovs-ofctl dump-ports br-int
OFPST_PORT reply (xid=0x2): 7 ports
  port  6: rx pkts=0, bytes=?, drop=?, errs=?, frame=?, over=?, crc=?
   tx pkts=636, bytes=?, drop=0, errs=?, coll=?
  port  4: rx pkts=?, bytes=?, drop=?, errs=?, frame=?, over=?, crc=?
   tx pkts=?, bytes=?, drop=?, errs=?, coll=?
  port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
   tx pkts=874, bytes=98998, drop=0, errs=0, coll=0
  port  1: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
   tx pkts=874, bytes=95570, drop=0, errs=0, coll=0
  port  5: rx pkts=?, bytes=?, drop=?, errs=?, frame=?, over=?, crc=?
   tx pkts=?, bytes=?, drop=?, errs=?, coll=?
  port  2: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
   tx pkts=874, bytes=95570, drop=0, errs=0, coll=0
  port  3: rx pkts=?, bytes=?, drop=?, errs=?, frame=?, over=?, crc=?
   tx pkts=?, bytes=?, drop=?, errs=?, coll=?
===

One thing I find odd is that showing the br-int bridge (and all the other 
bridges for that matter) seem to show the port_down for almost all the 
interfaces:
===
ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:9ab69b50904d n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src 
mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(int-br-em1): addr:0a:41:20:a8:6b:50
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 2(int-br-p6p1): addr:2a:5d:62:23:0a:60
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 3(tapa32182b8-ee): addr:00:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 4(qr-a510b75f-f7): addr:00:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 5(qr-d2f1d4a0-a9): addr:00:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 6(vhucf0a0213-68): addr:00:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:9a:b6:9b:50:90:4d
 config: PORT_DOWN
 state:  LINK_DOWN
 current:    10MB-FD COPPER
 speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0 
===

In fact, trying to bring them up (i.e. ovs-ofctl mod-port br-int 3 up) does not 
change anything for any of 

[openstack-dev] [ironic] final release in Liberty cycle

2015-08-27 Thread Ruby Loo
Hi,

As (most of) you are aware, in this cycle, ironic decided to switch to a
feature-based release model[0].

Our first semver release, 4.0.0, was tagged this week but a few more things
need to be ironed out still (hopefully there will be an announcement about
that in the near future).

What I wanted to mention is that according to the new process, there will
be a final release of ironic that coincides with the Liberty coordinated
release. The current plan is to cut a 4.1.0 release around Liberty RC1,
which will become our stable/liberty branch. According to the schedule[1],
that would most likely happen the week of September 21 or thereabouts.
We'll have a better idea as we get closer to the date.

It isn't clear to me how ironic is affected by the DepFreeze[2] and the
global requirements. Maybe someone who understands that part could
explain. (And perhaps how the new ironic-lib fits into this freeze, or not.)

--ruby

[0]
http://specs.openstack.org/openstack/ironic-specs/specs/liberty-implemented/feature-based-releases.html
[1] https://wiki.openstack.org/wiki/Liberty_Release_Schedule
[2] https://wiki.openstack.org/wiki/DepFreeze
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][manila] latest microversion considered dangerous

2015-08-27 Thread Matt Riedemann



On 8/27/2015 2:38 PM, Ben Swartzlander wrote:

Manila recently implemented microversions, copying the implementation
from Nova. I really like the feature! However I noticed that it's legal
for clients to transmit 'latest' instead of a real version number.

THIS IS A TERRIBLE IDEA!

I recommend removing support for 'latest' and forcing clients to request
a specific version (or accept the default).

Allowing clients to request the latest microversion guarantees
undefined (and likely broken) behavior* in every situation where a
client talks to a server that is newer than it.

Every client can only understand past and present API implementations,
not future ones. Transmitting 'latest' implies an assumption
that the future is not so different from the present. This assumption
about future behavior is precisely what we don't want clients to make,
because it prevents forward progress. One of the main reasons
microversions is a valuable feature is because it allows forward
progress by letting us make major changes without breaking old clients.

If clients are allowed to assume that nothing will change too much in
the future (which is what asking for 'latest' implies) then the server
will be right back in the situation it was trying to get out of -- it
can never change any API in a way that might break old clients.

I can think of no situation where transmitting 'latest' is better than
transmitting the highest version that existed at the time the client was
written.

-Ben Swartzlander

* Undefined/broken behavior unless the server restricts itself to never
making any backward-compatibility-breaking change of any kind.
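To illustrate the client-side choice: the header name below follows the Nova-style convention of the time, and both it and the version number are illustrative assumptions, not authoritative values.

```python
# Sketch of pinning a microversion versus requesting "latest".
CLIENT_MAX_KNOWN = "2.7"   # highest microversion this client was written for


def build_headers(pin_to_latest=False):
    """Build the (assumed) microversion header for an API request."""
    version = "latest" if pin_to_latest else CLIENT_MAX_KNOWN
    return {"X-OpenStack-Nova-API-Version": version}
```

Sending CLIENT_MAX_KNOWN gets the client semantics it was actually written and tested against; sending "latest" gets it whatever a future server happens to define, which is exactly the undefined behavior described above.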


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Makes sense to me.

I see we note that you can do this in the nova devref but there isn't 
any warning about using it:


http://docs.openstack.org/developer/nova/api_microversion_dev.html?highlight=latest#

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Recall: [api][keystone][openstackclient] Standards for object name attributes and filtering

2015-08-27 Thread Chen, Wei D
Chen, Wei D would like to recall the message, [openstack-dev] 
[api][keystone][openstackclient] Standards for   object name attributes and 
filtering.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Help with stable/juno branches / releases

2015-08-27 Thread Tony Breeds
On Fri, Aug 28, 2015 at 11:12:43AM +1200, Robert Collins wrote:

 I'm pretty sure it *will* be EOL'd. OTOH thats 10 weeks of fixes folk
 can get. I think you should do it if you've the stomach for it, and if
 its going to help someone. I can aid by cutting library releases for
 you I think (haven't checked stable releases yet, and I need to update
 myself on the tooling changes in the last fortnight...).

Okay, I certainly have the stomach for it; in some perverse way it's fun, as I'm
learning about parts of the process / code base that are new to me :)

My concerns were mostly around other people's time (like the PTLs/cores I need
to hassle and thems with the release power :))

So I'll keep going on it.

Right now there aren't any library releases to be done.

Thanks Robert.

Yours Tony.


pgpxralnYZzpE.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Hope Server Count API can land in Mitaka

2015-08-27 Thread Rui Chen
hi folks:

When we use paginated queries to retrieve instances, the current
list-servers API gives us no way to get the total count of instances.
That total is important for operators. Consider a case where an operator
wants to know how many 'error' instances exist in the current deployment,
in order to plan how to handle them. If the query page limit is 100, the
first page tells them nothing about the total count of 'error' instances:
are there 101 in the subsequent pages, or 1000?
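Today the only workaround is client-side: walk every page with marker/limit just to obtain a count. A rough sketch (list_servers here is a stand-in for the real API call, not an actual client method):

```python
# Count matching servers by exhausting the paginated listing.
def count_servers(list_servers, status="ERROR", limit=100):
    total, marker = 0, None
    while True:
        page = list_servers(status=status, limit=limit, marker=marker)
        total += len(page)
        if len(page) < limit:
            # A short (or empty) page means we reached the end.
            return total
        marker = page[-1]["id"]
```

This is exactly the expensive round-tripping a Server Count API would avoid.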

I found the Server Count API blueprint [1], which looks like it would
solve this, so I would like to see it land in the Mitaka release. But the
spec [2] has not been updated since May. Is somebody still working on
this? I can help push this feature if needed.


[1]: https://blueprints.launchpad.net/nova/+spec/server-count-api
[2]: https://review.openstack.org/#/c/134279/


Best Regards.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2015-08-27 Thread Vikas Choudhary
Hi Stanislaw,

I also faced a similar issue. The reason might be that the OpenStack heat
service is not reachable from inside the master instance.
Please check /var/log/cloud-init.log for any connectivity-related error
message and, if you find one, retry the failed command manually with the
correct URL.


If this is the issue, you need to set correct HOST_IP in localrc.


-Vikas Choudhary



___
Hi Stanislaw,

Your host with Fedora should have special config file, which will send
signal to WaitCondition.
For a good example, please take a look at this template:
https://github.com/openstack/heat-templates/blob/819a9a3fc9d6f449129c8cefa5e087569340109b/hot/native_waitcondition.yaml

Also, I suppose the best place for such questions would be
https://ask.openstack.org/en/questions/

Regards,
Sergey.
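For reference, the wc_notify signal in templates like the one above essentially boils down to POSTing a small JSON status document to the signal URL that Heat hands the instance. A hedged sketch of that payload (field names are illustrative of the native wait-condition handle, not an authoritative spec):

```python
import json

# Build the JSON body a wait-condition signal carries. The actual signal
# URL comes from the WaitConditionHandle resource at runtime.
def build_signal(status="SUCCESS", reason="Setup complete", data=None):
    return json.dumps({
        "status": status,   # SUCCESS or FAILURE
        "reason": reason,
        "data": data or "",
    })

# On the instance, something along these lines then delivers it:
#   curl -X POST -H 'Content-Type: application/json' \
#        -d "$PAYLOAD" "$SIGNAL_URL"
```

If that POST never happens (for example because the heat endpoint is unreachable from the instance, as suggested above), the master_wait_condition resource times out.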

On 26 August 2015 at 09:23, Pitucha, Stanislaw Izaak
stanislaw.pitucha at hp.com wrote:

 Hi all,
 I’m trying to stand up magnum according to the quickstart instructions
 with devstack.
 There’s one resource which times out and fails: master_wait_condition. The
 kube master (fedora) host seems to be created, I can login to it via ssh,
 other resources are created successfully.
 What can I do from here? How do I debug this? I tried to look for the
 wc_notify itself to try manually, but I can’t even find that script.
 Best Regards,
 Stanisław Pitucha

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3][dvr][fwaas] FWaaS

2015-08-27 Thread bharath

Hi ,

Adding more info

The create_firewall(self, agent_mode, apply_list, firewall) and
update_firewall(self, agent_mode, apply_list, firewall) APIs are
being called with an empty apply_list.

apply_list is generated by _get_router_info_list_for_tenant. The root
cause of the empty list is that self.router_info is itself empty.



If I kill the firewall agent and start it again, router_info gets
updated with the existing routers, update_firewall is then called with a
non-empty apply_list, and the firewall rules are applied to the existing
routers. But subsequent firewall updates still see an empty apply_list
and are not applied.

So basically, to apply firewall rules I end up restarting the
firewall agent repeatedly.
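A stripped-down sketch (names borrowed from the fwaas agents, logic purely illustrative) of why an empty self.router_info necessarily produces an empty apply_list:

```python
# Toy model of the agent-side cache the thread is about.
class FakeFwAgent:
    def __init__(self):
        # Populated by router add/update notifications. If those are never
        # processed, it stays empty until the agent restarts and resyncs.
        self.router_info = {}   # router_id -> {"tenant_id": ...}

    def _get_router_info_list_for_tenant(self, tenant_id):
        """Filter cached routers by tenant; empty cache means empty list."""
        return [ri for ri in self.router_info.values()
                if ri["tenant_id"] == tenant_id]
```

So the symptom (empty apply_list, fixed only by a restart) points at the router notifications never reaching router_info, rather than at the filtering itself.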



The agent I am using is the vyatta firewall agent
(neutron_fwaas/services/firewall/agents/vyatta).

I checked the other agents' code; the implementation is almost the same
in all of them.

This seems to be a recent breakage, as it was working fine last month.
I suspect recent changes in neutron or neutron-fwaas might have broken
it.


Can someone help me out with this issue?

Thanks,
bharath



On Thursday 27 August 2015 09:26 PM, bharath wrote:

Hi,

 While testing FWaaS, I found that router_info is not getting updated.
The list always seems to be empty and only gets updated after a
restart of the firewall agent.

This issue results in an empty list when calling
_get_router_info_list_for_tenant.


I can see a comment saying for routers without an interface,
get_routers returns the router, but this is not yet populated in
router_info. In my case, though, router_info is empty even though
the routers do have an interface.


This seems to be a recent breakage, as it was working fine last month.


Thanks,
bharath


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [javascript] [eslint-config-openstack]

2015-08-27 Thread Michael Krotscheck
TL/DR: Do you have an opinion on language style? Of course you do! Come
weigh in on javascript style rules!

https://review.openstack.org/#/q/status:open+project:openstack/eslint-config-openstack,n,z
(List will continually update as we add more patches)


The original introduction of the eslint-config-openstack project was to
create a javascript equivalent to 'hacking'. When created, we disabled many
of the default rules, because we didn't want to overwhelm horizon with new
rules while they were still in the middle of their JSCS cleanup.

Well, now their build finally passes, and their eslint job has the vote.
Also, in the meantime, eslint-config-openstack has been adopted by merlin,
ironic-webclient, and refstack, and is likely to be adopted by more pure
javascript projects in the future.

It's time to revisit all the rules that we deactivated, and discuss which
of them make sense for OpenStack. We'll release these in small batches, so
that each project won't be overwhelmed by them.

If you need to come up for air from the pre-feature-freeze craziness, come
on and offer your two cents on some language style rules!

Michael
We are not responsible for broken builds. The only reason this would be a
problem is if you used fuzzy dependency versions, and you shouldn't be
using those in the first place :P
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] How to add vendor specific db tables in neutron

2015-08-27 Thread Anna Kamyshnikova
Hi!

Your vendor repo is probably missing the change that allows
neutron-db-manage to find the alembic migrations automatically when the
project is installed. See examples of such changes in networking-cisco
[1] and vmware-nsx [2].


[1] - https://review.openstack.org/214403

[2] - https://review.openstack.org/214413


On Thu, Aug 27, 2015 at 11:36 AM, bharath bhar...@brocade.com wrote:

 Hi ,


 I need to add vendor-specific db tables in neutron, but vendor-specific
 tables are no longer allowed in neutron; they need to be added to the
 vendor repo itself.
 So I created alembic versioning in the vendor repo and added the new
 tables there.
 But I am not seeing the tables get created while stacking devstack.

 So how do I trigger table creation from the vendor repo?


 Thanks,
 bharath




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Re: New API for node create, specifying initial provision state

2015-08-27 Thread Lucas Alvares Gomes
On Wed, Aug 26, 2015 at 11:09 PM, Julia Kreger
juliaashleykre...@gmail.com wrote:
 My apologies for not expressing my thoughts on this matter
 sooner, however I've had to spend some time collecting my
 thoughts.

 To me, it seems like we do not trust our users.  Granted,
 when I say users, I mean administrators who likely know more
 about the disposition and capabilities of their fleet than
 could ever be discovered or inferred via software.

 Sure, we have other users, mainly in the form of consumers,
 asking Ironic for hardware to be deployed, but the driver for
 adoption is who feels the least amount of pain.

 API versioning aside, I have to ask the community, what is
 more important?

 - An inflexible workflow that forces an administrator to
 always have a green field, and to step through a workflow
 that we've dictated, which may not apply to their operational
 scenario, ultimately driving them to write custom code to
 inject new nodes into the database directly, which will
 surely break from time to time, causing them to hate Ironic
 and look for a different solution.

 - A happy administrator that has the capabilities to do their
 job (and thus manage the baremetal node wherever it is in the
 operator's lifecycle) in an efficient fashion, thus causing
 them to fall in love with Ironic.


I'm sorry, I find the language used in this reply very offensive.
That's not even a real question; given the alternatives, you're basically
asking the community What's more important, being happy or being sad? Being
efficient or not efficient?

It's not about an inflexible workflow that dictates what people
do and makes them hate the project. It's about finding a common pattern
for a workflow that will work for all types of machines; it's about
consistency; it's about keeping the history of what happened to a
node. When a node is in a specific state you know what it's been
through, so you can easily debug it (i.e. an ACTIVE node means that it
passed through MANAGEABLE -> CLEAN* -> AVAILABLE -> DEPLOY* -> ACTIVE.
Even if some of the states are non-op for a given driver, it's a clear
path).
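The path above can be sketched as a toy state machine (a tiny, simplified subset of ironic's real one; states and transitions abbreviated for illustration):

```python
# Only transitions listed here are legal; everything else is refused.
ALLOWED = {
    "ENROLL": {"MANAGEABLE"},
    "MANAGEABLE": {"CLEANING"},
    "CLEANING": {"AVAILABLE"},
    "AVAILABLE": {"DEPLOYING"},
    "DEPLOYING": {"ACTIVE"},
}


def advance(state, target):
    """Refuse any transition the state machine does not define."""
    if target not in ALLOWED.get(state, set()):
        raise ValueError("invalid transition %s -> %s" % (state, target))
    return target
```

Registering a node directly as ACTIVE would skip every intermediate state, which is exactly the history and debuggability the state machine is meant to preserve.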

Think about our API, it's not that we don't allow vendors to add every
new features they have to the core part of the API because we don't
trust them or we think that their shiny features are not worthy. We
don't do that to make it consistent, to have an abstraction layer that
will work the same for all types of hardware.

I meant it when I said I want to have a fresh mind to read the proposal for
this new workflow. But I would rather read a technical explanation than an
emotional one. What I want to know, for example, is what it will look
like when one registers a node in the ACTIVE state directly. What about the
internal driver fields? What about the TFTP/HTTP environment that is
built as part of the DEPLOY process? What about the ports in Neutron?
And so on...

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Re: New API for node create, specifying initial provision state

2015-08-27 Thread Dmitry Tantsur

On 08/27/2015 11:40 AM, Lucas Alvares Gomes wrote:

On Wed, Aug 26, 2015 at 11:09 PM, Julia Kreger
juliaashleykre...@gmail.com wrote:

My apologies for not expressing my thoughts on this matter
sooner, however I've had to spend some time collecting my
thoughts.

To me, it seems like we do not trust our users.  Granted,
when I say users, I mean administrators who likely know more
about the disposition and capabilities of their fleet than
could ever be discovered or inferred via software.

Sure, we have other users, mainly in the form of consumers,
asking Ironic for hardware to be deployed, but the driver for
adoption is who feels the least amount of pain.

API versioning aside, I have to ask the community, what is
more important?

- An inflexible workflow that forces an administrator to
always have a green field, and to step through a workflow
that we've dictated, which may not apply to their operational
scenario, ultimately driving them to write custom code to
inject new nodes into the database directly, which will
surely break from time to time, causing them to hate Ironic
and look for a different solution.

- A happy administrator that has the capabilities to do their
job (and thus manage the baremetal node wherever it is in the
operator's lifecycle) in an efficient fashion, thus causing
them to fall in love with Ironic.



I'm sorry, I find the language used in this reply very offensive.
That's not even a real question; given the alternatives, you're basically
asking the community What's more important, being happy or being sad? Being
efficient or not efficient?

It's not about an inflexible workflow that dictates what people
do and makes them hate the project. It's about finding a common pattern
for a workflow that will work for all types of machines; it's about
consistency; it's about keeping the history of what happened to a
node. When a node is in a specific state you know what it's been
through, so you can easily debug it (i.e. an ACTIVE node means that it
passed through MANAGEABLE -> CLEAN* -> AVAILABLE -> DEPLOY* -> ACTIVE.
Even if some of the states are non-op for a given driver, it's a clear
path).

Think about our API, it's not that we don't allow vendors to add every
new features they have to the core part of the API because we don't
trust them or we think that their shiny features are not worthy. We
don't do that to make it consistent, to have an abstraction layer that
will work the same for all types of hardware.

I meant it when I said I want to have a fresh mind to read the proposal for
this new workflow. But I would rather read a technical explanation than an
emotional one. What I want to know, for example, is what it will look
like when one registers a node in the ACTIVE state directly. What about the
internal driver fields? What about the TFTP/HTTP environment that is
built as part of the DEPLOY process? What about the ports in Neutron?
And so on...


I agree with everything Lucas said.

I also want to point out that it's completely unrealistic to expect even
a majority of Ironic users to have any idea of how Ironic actually
works. And definitely not all our users are Ironic developers.

I routinely help people who have never used Ironic before, and they don't
have problems running 1, 2, or 10 commands, as long as those are written
in the documentation and clearly explained. What they do have problems
with is several ways of doing the same thing, with different ways being
broken under different conditions.




Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] About logging-flexibility

2015-08-27 Thread Davanum Srinivas
Daisuke,

It's very late to merge these patches for Liberty. Sorry, they will have
to wait till M. We can talk more about it at next Monday's Oslo meeting.
Please let us know if you can make it, and I'll add a topic there.

-- Dims

On Thu, Aug 27, 2015 at 1:04 AM, Fujita, Daisuke 
fuzita.dais...@jp.fujitsu.com wrote:

 Hello, Ihar and Doug and Oslo team members,

 I'm Daisuke Fujita,

 I am writing this email because I'd like to ask you for a code
 review.
 Please visit the following reviews, which I uploaded.

  https://review.openstack.org/#/c/216496/
  https://review.openstack.org/#/c/216506/
  https://review.openstack.org/#/c/216524/
  https://review.openstack.org/#/c/216551/

 These are suggestions for the following etherpad and spec.
  https://etherpad.openstack.org/p/logging-flexibility
  https://review.openstack.org/#/c/196752/


 Previously I talked with Ihar in Neutron-IRC[1] about the bug-report which
 I reported[2].

 As a result of what we discussed, I took these over.

 I'd like to merge these patches in Liberty-3, so I'd appreciate it if
 you could cooperate.


 Thank you for your cooperation.

 Best Regards,
 Daisuke Fujita

 [1]
 http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2015-06-30.log.html

 http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2015-08-19.log.html
 (Please, find my IRC name Fdaisuke)

 [2] https://bugs.launchpad.net/neutron/+bug/1466476



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] About logging-flexibility

2015-08-27 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 08/27/2015 11:56 AM, Davanum Srinivas wrote:
 Daisuke,
 
 It's very late for merging these patches for Liberty. Sorry, they
 will have to wait till M. We can talk more about it on next
 Monday's Oslo meeting. Please let us know and i'll add a topic
 there if you can make it.
 

Not judging the cycle concern, I believe this should be a single patch
for oslo.log of ~100-150 LOC, with tests.

Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJV3t+6AAoJEC5aWaUY1u572kQH/jSJOSbZbaK6M5ebptW/8i/E
MxsbhRCez/Iwl33ULMjWbTUWNZgFY9SgBqrddR6ueSnn/KXsVxWodVQ5RtMVa8Gc
VrtY7SpSQ7FFy0glC6tvGKkPHT44HOrXeZQ2b7hsA+bdH3s2Uwx/KJ1REcG+w4CY
l0JUtycTtbhHC5Rb7Z17J9Z/rYWUtbWiZp4Ez+7jUdGsHHtNfO36tGQcKgNApIJQ
Ns8qlXjwRo9yvOEwO+OhkR+i7FQDjzgmQLATNBehnq9hFfHe5mJy21U9pscIHiL7
qzUBQ/i9zhWuybrc4FNk/YHxg3CoryD5xkBzbBQglX0qTVSCMfkvMjRfxLxlKfs=
=k8ar
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] How to add vendor specific db tables in neutron

2015-08-27 Thread bharath

Hi,

The Liberty code freeze is September 1st, but I have to add a
vendor-specific table for the Liberty release, and vendor-specific
alembic support in neutron still seems to be in progress. Can I simply
add the table names to external.py under alembic_migration, push that
upstream, and implement the actual tables later in the vendor repo?

Thanks, bharath



On Thursday 27 August 2015 03:06 PM, Ihar Hrachyshka wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 08/27/2015 10:36 AM, bharath wrote:

Hi ,


I need to add a vendor specific db tables in neutron but vendor
specific are no more allowed in the neutron. Tables need to be
added to vendor repo itself. So i created alembic versioning in
vendor repo. and added new tables under vendor repo. But i am not
seeing tables getting created while stacking the devstack.

So how to trigger the tables creation in vendor repo?


http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html#indepedent-sub-project-tables

Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJV3tohAAoJEC5aWaUY1u57H+kH/jLD1bkLrwRoOVCi+rmonV+g
fuU15mritbUWag3dyg64GRMjGQ/aRFoP5D9HjATkSAa0wb7H5UlbAGfsE2PqHtrR
MOJ9l7ldJ1tAb5JS8Pti60uE0zEqv4dBEF2SmoXxRw88kN1WvUaiBtovBuIsfxwB
pm+3MIZH8AEBnBYIwnsTdU59lMPJgDKdfCU8WlgpewM5rxrtBAHANkrr+wCHYH2l
BfUgY+3mu+k4vKzravmgf29dDw8kzc68qXb+Z8IfyWbqadSoc8PhVke5DBf4utAw
tzqHGUpIHHBPUq0zLM6wwJsIJLAX33glJ00Sl8JeIMjh8RN/qZASZWOi+1CI9hc=
=zne4
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How many compute nodes supported by a basic controller and an advanced high end configuration controller

2015-08-27 Thread Kamsali, RaghavendraChari (Artesyn)
Hi,

Can anyone answer this:

How many compute nodes are supported by an OpenStack controller with a basic
hardware configuration, and how many with an advanced, high-end one?

What I want to understand here is OpenStack service performance.



Thanks and Regards,
Raghavendrachari kamsali,
Embedded Computing and Power,
Hyderabad, AndhraPradesh , India

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] periodic task

2015-08-27 Thread Gary Kotton


On 8/25/15, 2:43 PM, Andrew Laski and...@lascii.com wrote:

On 08/25/15 at 06:08pm, Gary Kotton wrote:


On 8/25/15, 9:10 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:



On 8/25/2015 10:03 AM, Gary Kotton wrote:


 On 8/25/15, 7:04 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 8/24/2015 9:32 PM, Gary Kotton wrote:
 In item #2 below the reboot is done via the guest and not via the nova
 APIs :)

 From: Gary Kotton gkot...@vmware.com mailto:gkot...@vmware.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Date: Monday, August 24, 2015 at 7:18 PM
 To: OpenStack List openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [nova] periodic task

 Hi,
 A couple of months ago I posted a patch for bug
 https://launchpad.net/bugs/1463688. The issue is as follows: the
 periodic task detects that the instance state does not match the state
 on the hypervisor and it shuts down the running VM. There are a number
 of ways that this may happen and I will try and explain:

   1. VMware driver example: a host where the instances are running
  goes down. This could be a power outage, host failure, etc. The
  first iteration of the periodic task will determine that the
  actual instance is down. This will update the state of the
  instance to DOWN. The VC has the ability to do HA and it will
  start the instance up and running again. The next iteration of
  the periodic task will determine that the instance is up and the
  compute manager will stop the instance.
   2. All drivers: the tenant decides to do a reboot of the instance
  and that coincides with the periodic task state validation. At
  this point in time the instance will not be up and the compute
  node will update the state of the instance as DOWN. On the next
  iteration the states will differ and the instance will be shut
  down.
 Basically the issue hit us with our CI, and there was no CI running
 for a couple of hours due to the fact that the compute node decided
 to shut down the running instances. The hypervisor should be the
 source of truth and it should not be the compute node that decides to
 shut down instances. I posted a patch to deal with this:
 https://review.openstack.org/#/c/190047/. Which is the reason for
 this mail. The patch is backwards compatible, so existing deployments
 (and the random shutdowns) continue to work as they do today, and the
 admin now has the ability to just log when there is an inconsistency.

 We do not want to disable the periodic task, as knowing the current
 state of the instance is very important and has a ton of value; we
 just do not want the periodic task to shut down a running instance.

 Thanks
 Gary
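The behaviour Gary proposes in https://review.openstack.org/#/c/190047/ — making the reaction to a power-state mismatch configurable instead of always stopping the instance — can be sketched roughly as follows. Every name here is illustrative, not the actual nova code:

```python
import logging

LOG = logging.getLogger(__name__)


def handle_power_state_mismatch(instance_id, db_state, hv_state,
                                sync_action="stop"):
    """Decide what to do when DB and hypervisor power states differ.

    sync_action="stop" mimics today's behaviour (shut the instance
    down); sync_action="log" only records the inconsistency for the
    admin, treating the hypervisor as the source of truth.  Returns the
    action taken so callers (and tests) can inspect the decision.
    """
    if db_state == hv_state:
        return "noop"
    if sync_action == "log":
        LOG.warning("Instance %s state mismatch: db=%s hypervisor=%s",
                    instance_id, db_state, hv_state)
        return "log"
    LOG.warning("Stopping instance %s: db=%s hypervisor=%s",
                instance_id, db_state, hv_state)
    # the real code would call the compute stop API here
    return "stop"
```

With "log" configured, both scenarios above (vCenter HA restart, in-guest reboot racing the sync) would leave the instance running and only produce a warning for the operator.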






 In #2 the guest shouldn't be rebooted by the user (tenant) outside of
 the nova-api.  I'm not sure if it's actually formally documented in
 the nova documentation, but from what I've always heard/known, nova
 is the control plane and you should be doing everything with your
 instances via the nova-api.  If the user rebooted via nova-api, the
 task_state would be set and the periodic task would ignore the
 instance.

 Matt, this is one case that I showed where the problem occurs. There
 are others and I can invest time to find them. The fact that the
 periodic task is there is important. What I don't understand is why
 having an option of a log indication for the admin is something that
 is not useful, and instead we are going with having the compute node
 shut down the instance when this should not happen. Our
 infrastructure is behaving like cattle. That should not be the case,
 and the hypervisor should be the source of truth.

 This is a serious issue and instances in production can and will go
 down.


 --

 Thanks,

 Matt Riedemann








For the HA case #1, the periodic task checks to see if the instance.host
doesn't match the compute service host [1] and skips if they don't
match.

Shouldn't your HA scenario be updating which host the instance is
running on?  Or is this a vCenter-ism?

The nova compute node has not changed. It is not the 

Re: [openstack-dev] [Neutron] How to add vendor specific db tables in neutron

2015-08-27 Thread Anna Kamyshnikova
external.py stores the names of tables that were already created in
Neutron and whose models were then moved to vendor repos. So adding new
names to external.py won't help you.

On Thu, Aug 27, 2015 at 1:05 PM, bharath bhar...@brocade.com wrote:

 Hi,

  Liberty code freeze is September 1st, but I have to add a
  vendor-specific table for the Liberty release. As vendor-specific
  alembic support in neutron seems to be still in progress, can I
  simply add the table names in external.py under alembic_migration,
  push that upstream, and implement the actual tables later in the
  vendor repo?

 Thanks, bharath




 On Thursday 27 August 2015 03:06 PM, Ihar Hrachyshka wrote:


 On 08/27/2015 10:36 AM, bharath wrote:

 Hi,

 I need to add vendor-specific db tables in neutron, but
 vendor-specific tables are no longer allowed in neutron. Tables need
 to be added to the vendor repo itself. So I created alembic
 versioning in the vendor repo and added the new tables there. But I
 am not seeing the tables getting created while stacking devstack.

 So how do I trigger the table creation in the vendor repo?

 http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html#indepedent-sub-project-tables

 Ihar








-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] subunit2html location on images changing

2015-08-27 Thread Chris Dent

On Wed, 26 Aug 2015, Matthew Treinish wrote:


http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/functions.sh#n571


Is 'process_testr_artifacts' going to already be in scope for the
hook script or will it be necessary to source functions.sh to be
sure? If so, where is it?

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] How to add vendor specific db tables in neutron

2015-08-27 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 08/27/2015 12:05 PM, bharath wrote:
 Hi,

 Liberty code freeze is September 1st, but I have to add a
 vendor-specific table for the Liberty release. As vendor-specific
 alembic support in neutron seems to be still in progress, can I
 simply add the table names in external.py under alembic_migration,
 push that upstream, and implement the actual tables later in the
 vendor repo?
 

I suppose I am missing something, but which limitation in alembic
sub-project migration support have you hit?

Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJV3uvAAAoJEC5aWaUY1u572QgIAJt9V27RJWHNeFDua/mXVWYG
GFSIZ1kr331+YpB9aR7V2nLtW07eYZqfbDa/m7GfPyU/SntNbZeHAor9k6c9ZdOF
7EVHSIKH5C72n7+lagL1cEX/tNNlEKFhslIkQOnNuhcoXlH7Oqydb64VS8ApHx3T
RUbm1jeQ6DZQK001Nilvs3DJ/aeV/eDI/P2MIgrcvtULh/I+nw1nENc6S+8QfJcT
/2ccXfF/i2vw7NEi2T/2bEt5N2dae8ir7npY2N/QW+vD/1tCTLQuLY7bM/kbxC3j
AwtcMkANUSzz2y+YPRxwB+ZmX83wLZQVg3rkto/Gzi97au5Jpb4/bVR8Vi8U+ag=
=2kIL
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Removing unused dependencies like the 'discover' module from all projects

2015-08-27 Thread Chandan kumar
Hello,

I am packaging the 'discover' module
(https://bugzilla.redhat.com/show_bug.cgi?id=1251951) for RDO.
This module is no longer maintained, as per
http://code.google.com/p/unittest-ext/,
yet it is used as a test dependency in all the projects, per the
'openstack-requirements' module:
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L246
I had a discussion with lifeless about this
(https://github.com/testing-cabal/unittest-ext/issues/96)
and he has proposed a fix: https://review.openstack.org/#/c/217046/

Can someone confirm whether it is obsolete or not?
If it is obsolete, can we remove it, provided that does not break any
project? Then I can create a bug to track it.

Input on this is needed.

Thanks,

Chandan Kumar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade][cinder] Updates of rootwrap filters

2015-08-27 Thread Dulko, Michal
 -Original Message-
 From: Eric Harney [mailto:ehar...@redhat.com]
 Sent: Wednesday, August 26, 2015 5:15 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [grenade][cinder] Updates of rootwrap filters
 
 On 08/26/2015 09:57 AM, Dulko, Michal wrote:
  Hi,
 
   Recently, when working on a simple bug [1], I ran into the need to
  change rootwrap filter rules for a few commands. After sending the
  fix to Gerrit [2] it turned out that, when testing the upgraded
  cloud, grenade hadn't copied my updated volume.filters file, and
  therefore failed the check. I wonder how I should approach the
  issue:
   1. Make the grenade script for Cinder copy the new file to the
  upgraded cloud.
   2. Divide the patch into two parts: first add the new rules,
  leaving the old ones there, then fix the bug and remove the old
  rules.
   3. ?
 
  Any opinions?
 
  [1] https://bugs.launchpad.net/cinder/+bug/1488433
  [2] https://review.openstack.org/#/c/216675/
 
 
 I believe you have to go with option 1 and add code to grenade to handle
 installing the new rootwrap filters.
 
  grenade is detecting an upgrade incompatibility that requires a config
  change, which is a good thing.  Splitting it into two patches will
  still result in grenade failing, because it will test upgrading kilo
  to master, not patch A to patch B.
 
 Example for neutron:
 https://review.openstack.org/#/c/143299/
 
 A different example for nova (abandoned for unrelated reasons):
 https://review.openstack.org/#/c/151408/
 
 
 
 /me goes to investigate whether he can set the system locale to something
 strange in the full-lio job, because he really thought we had fixed all of the
 locale-related LVM parsing bugs by now.

Thanks, I've addressed that in following patch: 
https://review.openstack.org/#/c/217625/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Cloud Foundry service broker question

2015-08-27 Thread Nikolay Starodubtsev
Dmitry,
Do I understand properly that your recommendation is to change some
murano logic?



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-08-24 23:31 GMT+03:00 Dmitry mey...@gmail.com:

 I think that you can model application dependencies in a way it will allow
 multi-step provisioning and further  maintenance of each component. The
 example of such modeling could be seen in OASIS TOSCA.
 On Aug 24, 2015 6:19 PM, Nikolay Starodubtsev 
 nstarodubt...@mirantis.com wrote:

 Hi all,
 Today Stan Lagun and I discussed the question of how we can provision
 a complex murano app through Cloud Foundry.
 Here you can see the logs from #murano related to this discussion:
 http://eavesdrop.openstack.org/irclogs/%23murano/%23murano.2015-08-24.log.html#t2015-08-24T09:53:01

 So, the only way we see now to provision apps which have dependencies
 is step-by-step provisioning, manually updating JSON files on each
 iteration. We appreciate any ideas.
 Here is the link for review:
 https://review.openstack.org/#/c/196820/





 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


 Skype: dark_harlequine1



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] [infra] How to auto-generate stable release notes

2015-08-27 Thread Thierry Carrez
Doug Hellmann wrote:
 Excerpts from Robert Collins's message of 2015-08-19 11:04:37 +1200:
 Proposed data structure:
 - create a top level directory in each repo called release-notes
 - within that create a subdirectory called changes.
 - within the release-notes dir we place yaml files containing the
 release note inputs.
 - within the 'changes' subdirectory, the name of the yaml file will be
 the gerrit change id in a canonical form.
E.g. I1234abcd.yaml
This serves two purposes: it guarantees file name uniqueness (no
 merge conflicts) and lets us
determine which release to group it in (the most recent one, in
 case of merge+revert+merge patterns).
 
 We changed this to using a long enough random number as a prefix, with
 a slug value provided by the release note author to help identify what
 is in the file.
 
 I think maybe 8 hex digits for the prefix. Or should we go full UUID?
 
 The slug might be something like bug  or removing option foo
 which would be converted to a canonical form, removing whitespace in the
 filename. The slug can be changed, to allow for fixing typos, but the
 prefix needs to remain the same in order for the note to be recognized
 as the same item.

Random hex digit prefix feels overkill and not really more user-friendly
than ChangeIDs. Robert's initial proposal used ChangeIDs as a way to map
snippets to commits, and then to a given release. How would that work
with UUID+slugs ? Wondering if a one-directory-per-release structure
couldn't work instead.

 2) For each version: scan all the commits to determine gerrit change-id's.
  i) read in all those change ids .yaml files and pull out any notes within 
 them.
  ii) read in any full version yaml file (and merge in its contained notes)
  iii) Construct a markdown document as follows:
   a) Sort any preludes (there should be only one at most, but lets not
 error if there are multiple)
 
 Rather than sorting, which would change the order of notes as new items
 are added, what about listing them in an order based on when they were
 added in the history? We can track the first time a note file appears,
 so we can maintain the order even if a note is modified.

+1 That would match the way release notes are produced now (snippets
sorted in order of addition).

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] mock 1.3 breaking all of Kilo in Sid (and other cases of this kind)

2015-08-27 Thread Thomas Goirand
On 08/26/2015 09:21 PM, Robert Collins wrote:
 Now, the most annoying one is with testtools (ie: #796542). I'd
 appreciate having help on that one.

 Twisted's latest releases moved a private symbol that testtools
 unfortunately depends on.
 https://github.com/testing-cabal/testtools/pull/149 - and I just
 noticed now that Colin has added the test matrix we need, so we can
 merge this and get a release out this week.

 Hum... this is for the latest testtools, right? Could you help me with
 fixing testtools 0.9.39 in Sid, so that Kilo can continue to build
 there? Or is this too much work?
 
 Liberty will require a newer testtools, but the patch to twisted
 should be easily backportable - nothing else has changed near it (I
 just did an inspection of all the patches over the last year) - so it
 should trivially apply. [Except .travis.yml, which is irrelevant to
 Debian].

Thanks a lot for pointing to it. Indeed, the backport work was really
trivial, and now testtools 0.9.39 works perfectly in Sid. \o/

There's at least one issue which I know of: Horizon in Kilo needs
testtools, but it is also incompatible (ie: build-conflicts) with
python-unittest2, which is now a dependency of the newer testtools. So
upgrading to the latest testtools would make it impossible to build
Horizon with unit tests (which are by the way broken by mock 1.3).

 However, I'm not aware of any API breaks in testtools 1.8.0 that would
 affect kilo - we run the kilo tests in CI with latest testtools
 release; you may need the related dependencies updated as well though

Yes, which maybe is the problem (as per above). I prefer to not attempt
it right now, as it would be a lot of work.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] How to add vendor specific db tables in neutron

2015-08-27 Thread bharath

Hi,


I need to add vendor-specific db tables in neutron, but vendor-specific
tables are no longer allowed in neutron. Tables need to be added to the
vendor repo itself.
So I created alembic versioning in the vendor repo and added the new
tables there.
But I am not seeing the tables getting created while stacking devstack.

So how do I trigger the table creation in the vendor repo?


Thanks,
bharath




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] How to add vendor specific db tables in neutron

2015-08-27 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 08/27/2015 10:36 AM, bharath wrote:
 Hi,

 I need to add vendor-specific db tables in neutron, but
 vendor-specific tables are no longer allowed in neutron. Tables need
 to be added to the vendor repo itself. So I created alembic
 versioning in the vendor repo and added the new tables there. But I
 am not seeing the tables getting created while stacking devstack.

 So how do I trigger the table creation in the vendor repo?
 

http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html#indepedent-sub-project-tables

Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJV3tohAAoJEC5aWaUY1u57H+kH/jLD1bkLrwRoOVCi+rmonV+g
fuU15mritbUWag3dyg64GRMjGQ/aRFoP5D9HjATkSAa0wb7H5UlbAGfsE2PqHtrR
MOJ9l7ldJ1tAb5JS8Pti60uE0zEqv4dBEF2SmoXxRw88kN1WvUaiBtovBuIsfxwB
pm+3MIZH8AEBnBYIwnsTdU59lMPJgDKdfCU8WlgpewM5rxrtBAHANkrr+wCHYH2l
BfUgY+3mu+k4vKzravmgf29dDw8kzc68qXb+Z8IfyWbqadSoc8PhVke5DBf4utAw
tzqHGUpIHHBPUq0zLM6wwJsIJLAX33glJ00Sl8JeIMjh8RN/qZASZWOi+1CI9hc=
=zne4
-END PGP SIGNATURE-
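For readers hitting the same problem: the devref linked above boils down to the sub-project keeping its own alembic environment and migration scripts. A vendor-repo migration would look roughly like this — the revision id and table are made up for illustration, alembic and sqlalchemy must be importable, and only the sub-project's own db-manage entry point (not neutron's) will run it:

```python
"""add vendor_foo table

Revision ID: 1a2b3c4d5e6f
Revises: None
"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by alembic
revision = '1a2b3c4d5e6f'
down_revision = None


def upgrade():
    # Creates the vendor-specific table when the sub-project's
    # migration branch is upgraded to head.
    op.create_table(
        'vendor_foo',
        sa.Column('id', sa.String(36), primary_key=True),
        sa.Column('name', sa.String(255), nullable=True),
    )
```

The reason devstack does not create the tables is that stacking only runs neutron's own migrations; the sub-project's migration branch has to be upgraded explicitly (the exact command varies by release — check the devref section matching your tree).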

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] L7 - Tasks

2015-08-27 Thread Evgeny Fedoruk
Hi All,

Here is L7 work tasks etherpad
https://etherpad.openstack.org/p/Neutron_LBaaS_v2_-_L7_work_tasks

Please review and comment

Evg


-Original Message-
From: Evgeny Fedoruk [mailto:evge...@radware.com] 
Sent: Wednesday, August 26, 2015 8:45 PM
To: Samuel Bercovici; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [neutron][lbaas] L7 - Tasks

Hi,

As Sam mentioned, I will join Octavia meeting today, hope there will be time to 
discuss L7 tasks
L7 related patches in review now are:
Extension https://review.openstack.org/#/c/148232
CLI https://review.openstack.org/#/c/217276
Reference implementation  https://review.openstack.org/#/c/204957

Evg



-Original Message-
From: Samuel Bercovici 
Sent: Wednesday, August 26, 2015 4:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Evgeny Fedoruk
Subject: RE: [openstack-dev] [neutron][lbaas] L7 - Tasks

Hi,

I think that Evgeny is trying to complete everything besides the
reference implementation (API, CLI, Tempest, etc.).
Evgeny will join the Octavia IRC meeting, so it could be a good
opportunity to get status and sync activities.
As far as I know, 8/31 is feature freeze and not code complete. Please
correct me if I am wrong.

-Sam.



-Original Message-
From: Eichberger, German [mailto:german.eichber...@hp.com] 
Sent: Wednesday, August 26, 2015 2:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] L7 - Tasks

Hi Evgeny,

Of course we would love to have L7 in Liberty but that window is closing on 
8/31. We usually monitor the progress (via Stephen) at the weekly Octavia 
meeting. Stephen indicated that we won't get it before the L3 deadline and with 
all the open items it might still be tight. I am wondering if you can advise on 
that.

Thanks,
German

From: Evgeny Fedoruk evge...@radware.commailto:evge...@radware.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, August 25, 2015 at 9:33 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron][lbaas] L7 - Tasks

Hello

I would like to know if there is a plan for L7 extension work for Liberty.
There is an extension patch-set here: https://review.openstack.org/#/c/148232/
We will also need to do CLI work, which I have started; I will commit an
initial patch-set soon. A reference implementation was started by Stephen here:
https://review.openstack.org/#/c/204957/
and a tempest tests update should be done as well. I do not know if it was
discussed at IRC meetings.
Please share your thoughts about it.


Regards,
Evg


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [swift3] [s3] [ec2] What preferable way to implement S3 signature verification V4 in keystone projects?

2015-08-27 Thread Andrey Pavlov
Hi again,

Because there was no answer to my questions, I have decided to choose
and implement the first scenario.

So now I need the community to review my patchsets:
1) https://review.openstack.org/#/c/211933/
This is patchset for swift3 with new unit tests. It implements checking of
headers of signature v4 auth and preparation string to pass it to keystone.
2) https://review.openstack.org/#/c/215481/
This is patchset for keystone. It implements signature v4 calculation and
comparison with provided one.
3) https://review.openstack.org/#/c/215325/
This is patchset for devstack. It implements setting of region for checking
in swift3.

All new code is written in the previous architecture style.
So please review these patchsets.

On Mon, Aug 17, 2015 at 3:52 PM, Andrey Pavlov andrey...@gmail.com wrote:

 Hi,

 I'm trying to support AWS signature version 4 for S3 requests.
 Related bugs:[1] for keystonemiddleware and [2] for swift3:

 Also keystone doesn't have support for V4 signature verification for S3
 (but it supports V4 for EC2 requests).

 Differences between V1 and V4 can be found here - V1: [3] and V4: [4].
 (Signature verification has several differences for EC2 and S3 requests)

 My question is - how to implement V4 signature verification?
 I have several scenarios:
 1) Leave current architecture. Swift3 will parse authorization info, will
 calculate StringToSign, will place it in 'X-Auth-Token'
 and place some additional header with signature version info. s3token will
 provide these values to keystone. keystone will
 calculate signature with V4 algorithm and check it.
 2) Same as first but without s3token - swift3 will send all info to
 keystone itself.
 3) Same as first but most authorization headers will be parsed by s3token
 and s3token will send to keystone.

 I prefer the first scenario.
 But what does the keystone team think?
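Whichever scenario is chosen, the part keystone would have to implement is the standard AWS V4 signing-key derivation and an HMAC comparison over the StringToSign handed over by swift3 (see [4] below for the algorithm). A stdlib-only sketch — the region and service names are placeholders, and real code must also validate the credential scope:

```python
import hashlib
import hmac


def _hmac(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()


def signing_key(secret_key, date_stamp, region, service='s3'):
    """AWS signature V4 key derivation: a chain of HMAC-SHA256 steps."""
    k_date = _hmac(('AWS4' + secret_key).encode('utf-8'), date_stamp)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, 'aws4_request')


def check_v4_signature(secret_key, date_stamp, region, string_to_sign,
                       provided_signature):
    """Recompute the hex signature over string_to_sign and compare."""
    computed = hmac.new(signing_key(secret_key, date_stamp, region),
                        string_to_sign.encode('utf-8'),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(computed, provided_signature)
```

`hmac.compare_digest` is used so the comparison is constant-time, which matters when the verifier is an authentication endpoint.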


 The current implementation of S3 signature V1 verification has several
 oddities for me:

 First oddity for me is in implementation of EC2 and S3 verification in
 keystone -
 ec2tokens (in keystone) takes all request parameters, calculates all that
 it needs, and checks
 calculated signature with user provided (Because only keystone can
 securely access secret_key
 by provided access_key). But signature calculation code is placed in
 keystoneclient...
 But s3tokens takes strange 'token' attribute (that calculated outside of
 keystone), access_key and signature.
 Then keystone hash token with secret_key (that was obtained from DB by
 access_key) and checks this result
 with provided signature.
 Oddity for me is in different algorithms for similar essences.

 Next oddity is in swift pipeline for S3 requests -
 at 'first' request with S3 params recognized by swift3 plugin. It checks
 authorization information,
 validates S3 parameters, calculates StringToSign as it described in [3]
 and places it in 'X-Auth-Token' header.
 at next step s3token from keystonemiddleware takes X-Auth-Token (that is a
 StringToSign) from header,
 sends it to keystone to check authorization.
 Oddity for me is in s3token that doesn't parse authorization information
 unlike ec2token from keystonemiddleware.

 [1] https://bugs.launchpad.net/keystonemiddleware/+bug/1473042
 [2] https://bugs.launchpad.net/swift3/+bug/1411078
 [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html

 [4]
 http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html

 --
 Kind regards,
 Andrey Pavlov.




-- 
Kind regards,
Andrey Pavlov.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'

2015-08-27 Thread Gilles Dubreuil


On 26/08/15 06:30, Rich Megginson wrote:
 This concerns the support of the names of domain scoped Keystone
 resources (users, projects, etc.) in puppet.
 
 At the puppet-openstack meeting today [1] we decided that
 puppet-openstack will support Keystone domain scoped resource names
 without a '::domain' in the name, only if the 'default_domain_id'
 parameter in Keystone has _not_ been set.  That is, if the default
 domain is 'Default'.  In addition:
 
 * In the OpenStack L release, if 'default_domain_id' is set, puppet will
 issue a warning if a name is used without '::domain'.
 * In the OpenStack M release, puppet will issue a warning if a name is
 used without '::domain', even if 'default_domain_id' is not set.
 * In N (or possibly, O), resource names will be required to have
 '::domain'.
 

+1

 The current spec [2] and current code [3] try to support names without a
 '::domain' in the name, in non-default domains, provided the name is
 unique across _all_ domains.  This will have to be changed in the
 current code and spec.
 

Ack

 
 [1]
 http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-08-25-15.01.html
 
 [2]
 http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html
 
 [3]
 https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L217
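The naming rule being settled in this thread — a trailing '::domain' segment, with a fallback to the default domain and a deprecation warning planned when it is omitted — can be illustrated with a small parser. This is purely illustrative; the real provider is Ruby code in puppet-keystone, linked at [3]:

```python
import warnings


def split_resource_name(title, default_domain='Default'):
    """Split a Keystone resource title into (name, domain).

    'admin::services' -> ('admin', 'services'); a bare 'admin' falls
    back to the default domain and, per the plan above, warns that the
    '::domain' suffix will eventually be required.
    """
    if '::' in title:
        name, domain = title.rsplit('::', 1)
        return name, domain
    warnings.warn("resource name %r used without '::domain'; this will "
                  "be required in a future release" % title)
    return title, default_domain
```

Using rsplit on the last '::' means a project name may itself contain '::' without being misparsed, which is one way to keep older titles working during the deprecation window.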
 
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

