Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Adrian Otto
Team,

I have published a top level blueprint for a magnum-horizon-plugin:

https://blueprints.launchpad.net/magnum/+spec/magnum-horizon-plugin

My suggestion is that any contributor interested in contributing to this 
feature should subscribe to that blueprint, and record their intent to 
contribute in the Whiteboard of the BP. Furthermore, I suggest that any 
contributors who are a good fit for core reviewer duties for this effort 
subscribe to the blueprint and mark themselves as “Participation Essential” so 
I can get a clear picture of how to deal with grouping the related core 
reviewer team (or adding them to the current core group).

I think that this effort would benefit from a spec submitted as a review using 
the following template:

http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/liberty-template.rst

Adapt it for magnum (I have not contributed a spec template of our own yet. 
TODO.)

Contribute it here:

http://git.openstack.org/cgit/openstack/magnum/tree/specs

Thanks,

Adrian

On Jun 4, 2015, at 12:58 PM, Steven Dake (stdake) std...@cisco.com wrote:

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer, Bradley Jones, to tackle 
the work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors.   If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve



[openstack-dev] [Neutron] Missing allowed address pairs?

2015-06-04 Thread Armando M.
You have a better chance of getting an answer if you ask the -dev list and
add [Neutron] to the subject (done here).

That said, can you tell us a bit more about your deployment? You can also
hop on #openstack-neutron on Freenode to look for neutron developers who
can help you more interactively.

Cheers,
Armando

On 4 June 2015 at 14:03, Ken D'Ambrosio k...@jots.org wrote:

 Hi, all.  I've got two instances -- a Juno and an Icehouse -- both set up
 via Ubuntu/Juju.  And neither of them shows allowed address pairs when I
 do a neutron ext-list (I've tried on both the neutron-gateway and
 nova-cloud-controller).  From everything my co-worker and I have read, it
 seems like it *should* be in both of them, leading us to assume that we've
 somehow disabled that particular functionality.

 Any ideas on what to look at?

 Thanks much,

 -Ken



Re: [openstack-dev] [neutron] Service Chain project IRC meeting minutes - 06/04/2015

2015-06-04 Thread Armando M.
On 4 June 2015 at 14:17, Cathy Zhang cathy.h.zh...@huawei.com wrote:

  Thanks for joining the service chaining meeting today! Sorry for the
 time confusion. We will correct the weekly meeting time to 1700UTC (10am
 pacific time) Thursday #openstack-meeting-4 on the OpenStack meeting
 page.




Cathy, thanks for driving this. I took the liberty of carrying out one of the
actions identified in the meeting: the creation of a repo to help folks
collaborate over code/documentation/testing etc. [1]. As for the core team
definition, we'll start with a single member who can add new folks as more
docs/code gets poured in.

One question I had when looking at the minutes was regarding the slides [2].
I'm not sure whether it's premature to discuss deployment architectures while
the API is still baking, but I wonder if you had given some thought to having
a pure agentless architecture even for the OVS path.

Having said that, as soon as the repo is up and running, I'd suggest moving
any relevant documents (e.g. API proposal, use cases, etc.) over to the repo
and rebooting the review process so that everyone can be on the same page.

Cheers,
Armando

[1] https://review.openstack.org/#/c/188637/
[2]
https://docs.google.com/presentation/d/1SpVyLBCMRFBpMh7BsHmpENbSY6qh1s5NRsAS68ykd_0/edit#slide=id.p


  Meeting Minutes:
 http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.html

 Meeting Minutes (text):
 http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.txt

 Meeting Log:
 http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.log.html



 The next meeting is scheduled for June 11 (same place and time).



 Thanks,

 Cathy





Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread niuzhenguo
Same here, I’m interested in helping out with reviews from a Horizon standpoint.

Regards
Zhenguo Niu

From: David Lyle [mailto:dkly...@gmail.com]
Sent: Friday, June 05, 2015 3:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

I'm happy to provide reviews from the Horizon standpoint.

David

On Thu, Jun 4, 2015 at 12:55 PM, Jason Rist jr...@redhat.com wrote:
On 06/04/2015 11:58 AM, Steven Dake (stdake) wrote:
 Hey folks,

 I think it is critical for self-service needs that we have a Horizon 
 dashboard to represent Magnum.  I know the entire Magnum team has no 
 experience in UI development, but I have found at least one volunteer, Bradley 
 Jones, to tackle the work.

 I am looking for more volunteers to tackle this high impact effort to bring 
 Containers to OpenStack either in the existing Magnum core team or as new 
 contributors.   If you're interested, please chime in on this thread.

 As far as “how to get patches approved”, there are two models we can go with.

 Option #1:
 We add these UI folks to the magnum-core team and trust them not to +2/+A 
 Magnum infrastructure code.  This also preserves us as one team with one 
 mission.

 Option #2:
 We make a new core team magnum-ui-core.  This presents special problems if 
 the UI contributor team isn’t large enough to get reviews in.  I suspect 
 Option #2 will be difficult to execute.

 Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
 based upon the results.

 Regards
 -steve






I am interested in helping as well.  In my experience, #1 works best,
but I'm not a core, so I'm not sure my wisdom is counted here.

-J

--
Jason E. Rist
Senior Software Engineer
OpenStack Infrastructure Integration
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen



Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-04 Thread Jay Lau
I have filed a bp for this
https://blueprints.launchpad.net/magnum/+spec/auto-generate-name Thanks
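For illustration only, here is a minimal sketch of the Docker-style name
generation being proposed (a hypothetical helper, not existing Magnum code):

    import random

    # Small word lists; a real implementation would ship much larger ones.
    ADJECTIVES = ['brave', 'calm', 'eager', 'gentle', 'jolly']
    SURNAMES = ['curie', 'hopper', 'lovelace', 'tesla', 'turing']

    def generate_name():
        """Return a human readable name such as 'jolly_hopper'."""
        return '%s_%s' % (random.choice(ADJECTIVES), random.choice(SURNAMES))

    # Used only when the API request omits a name, e.g.:
    # bay_name = requested_name or generate_name()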

2015-06-04 14:14 GMT+08:00 Jay Lau jay.lau@gmail.com:

 Thanks Adrian, I see. Clear now.

 2015-06-04 11:17 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  Jay,

 On Jun 3, 2015, at 6:42 PM, Jay Lau jay.lau@gmail.com wrote:

   Thanks Adrian, some questions and comments in-line.

 2015-06-03 10:29 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

 I have reflected on this further and offer this suggestion:

  1) Add a feature to Magnum to auto-generate human readable names, like
 Docker does for un-named containers, and ElasticSearch does for naming
 cluster nodes. Use this feature if no name is specified upon the creation
 of a Bay or Baymodel.

 +1 on this


  -and-

  2) Add configuration directives (default=FALSE) for
 allow_duplicate_bay_name and allow_duplicate_baymodel_name. If TRUE,
 duplicate named Bay and BayModel resources will be allowed, as they are
 today.

  This way, by default Magnum requires a unique name, and if none is
 specified, it will automatically generate a name. This way no additional
 burden is put on users who want to act on containers exclusively using
 UUIDs, and cloud operators can decide if they want to enforce name
 uniqueness or not.

  In the case of clouds that want to allow sharing access to a BayModel
 between multiple tenants (example: a global BayModel named “kubernetes”)
 with allow_duplicate_baymodel_name set to FALSE, a user will still be
 allowed to

 Should this be allow_duplicate_baymodel_name set to TRUE?


  I know this is confusing, but yes, what I wrote was correct. Perhaps I
 could rephrase it to clarify:

  Regardless of the setting of allow_duplicate_bay* settings, we should
 allow a user to create a BayModel with the same name as a global or shared
 one in order to override the one that already exists from another source
 with one supplied by the user. When referred to by name, the one created by
 the user would be selected in the case where each has the same name
 assigned.

 create a BayModel with the name “kubernetes” and it will override
 the global one. If a user-supplied BayModel is present with the same name
 as a global one, we shall automatically select the one owned by the tenant.

 +1 on this. One question: what does a global BayModel mean? In
 Magnum, all BayModels belong to a tenant, and it seems there is no global
 BayModel?


  This is a concept we have not actually discussed, and we don't have
 today as a feature. The idea is that in addition to the BayModel resources
 that tenants create, we could also have ones that the cloud operator
 creates, and automatically expose to all tenants in the system. I am
 referring to these as global BayModel resources as a potential future
 enhancement.

  The rationale for such a global resource is a way for the Cloud
 Operator to pre-define the COE's they support, and pre-seed the Magnum
 environment with such a configuration for all users. Implementing this
 would require a solution for how to handle the ssh keypair, as one will
 need to be generated uniquely for every tenant. Perhaps we could have a
 procedure that a tenant uses to activate the BayModel by somehow adding
 their own public ssh key to a local subclass of it. Perhaps this could be
 implemented as a user defined BayModel that has a parent_id set to the uuid
 of a parent baymodel. When we instantiate one, we would merge the two into
 a single resource.

  All of this is about anticipating possible future features. The only
 reason I am mentioning this is that I want us to think about where we might
 go with resource sharing so that our name uniqueness decision does not
 preclude us from later going in this direction.

  Adrian


  About Sharing of BayModel Resources:

  Similarly, if we add features to allow one tenant to share a BayModel
 with another tenant (pending acceptance of the offered share), and
 duplicate names are allowed, then prefer in this order: 1) Use the resource
 owned by the same tenant, 2) Use the resource shared by the other tenant
 (post acceptance only), 3) Use the global resource. If duplicates exist in
 the same scope of ownership, then raise an exception requiring the use of a
 UUID in that case to resolve the ambiguity.

 We can file a bp to trace this.


  One expected drawback of this approach is that tools designed to
 integrate with one Magnum may not work the same with another Magnum if the
 allow_duplicate_bay* settings are changed from the default values on one
 but not the other. This should be made clear in the comments above the
 configuration directive in the example config file.

 Just curious, why do we need this feature? Different Magnum clusters might
  be using different COE engines. So are you mentioning the case where all of the
  Magnum clusters are using the same COE engine? If so, yes, this should be made
  clear in the configuration file.

   Adrian

   On Jun 2, 2015, at 8:44 PM, Jay Lau 

Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-04 Thread Adrian Otto

On Jun 4, 2015, at 11:03 AM, Devananda van der Veen devananda@gmail.com wrote:


On Jun 4, 2015 12:00 AM, Xu, Hejie hejie...@intel.com wrote:

 Hi, guys,

 I’m working on adding Microversions to the API-WG’s guidelines, which should make 
 sure we have consistent Microversion behavior in the API for users.
 Nova and Ironic already have Microversion implementations, and as far as I know 
 Magnum https://review.openstack.org/#/c/184975/ is going to implement 
 Microversions as well.

 I hope all the projects which support (or plan to support) Microversions can join 
 the review of the guideline.

 The Microversion specification (this is almost a copy from nova-specs): 
 https://review.openstack.org/#/c/187112
 And another guideline for when we should bump the Microversion: 
 https://review.openstack.org/#/c/187896/

 As far as I know, there is already a small difference between Nova's and Ironic's 
 implementations. Ironic returns the min/max versions in HTTP headers when the requested
 version isn't supported by the server. There is no such thing in 
 Nova, but that is something we also need for version negotiation in Nova.
 Sean has pointed out that we should use the response body instead of HTTP headers, 
 since the body can include an error message. I really hope the Ironic team can take a
 look and say whether you have a compelling reason for using HTTP headers.

 And if we decide to return the body instead of HTTP headers, we probably need to 
 think about backwards compatibility too, because the Microversion mechanism itself 
 isn't versioned. So I think we should keep those headers for a while; does that make sense?

 I hope we end up with a good guideline for Microversions, because we can only change 
 the Microversion mechanism itself in a backwards-compatible way.

Ironic returns the min/max/current API version in the http headers for every 
request.

Why would it return this information in a header on success and in the body on 
failure? (How would this inconsistency benefit users?)

To be clear, I'm not opposed to *also* having a useful error message in the 
body, but while writing the client side of api versioning, parsing the range 
consistently from the response header is, IMO, better than requiring a 
conditional.

+1. I fully agree with Devananda on this point. Use the headers consistently, 
and add helpful errors into the body only as an addition to that behavior, not 
a substitute.
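For illustration, a rough sketch of what consistent header-based version
negotiation could look like on the client side. The header names and the
requests-style response object are assumptions for this example, not any
project's actual API:

    def _parse(ver):
        # "1.10" -> (1, 10), so that "1.10" correctly sorts after "1.9"
        major, minor = ver.split('.')
        return int(major), int(minor)

    def negotiate_version(resp, wanted):
        """Pick a usable microversion from min/max headers on any response."""
        minimum = resp.headers.get('X-OpenStack-API-Minimum-Version')
        maximum = resp.headers.get('X-OpenStack-API-Maximum-Version')
        if minimum is None or maximum is None:
            return None  # server predates microversions
        if _parse(minimum) <= _parse(wanted) <= _parse(maximum):
            return wanted
        if _parse(wanted) > _parse(maximum):
            return maximum  # fall back to the newest version the server supports
        raise ValueError('requested version %s is older than %s' % (wanted, minimum))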

Adrian

-Deva



Re: [openstack-dev] [neutron] Service Chain project IRC meeting minutes - 06/04/2015

2015-06-04 Thread Vikram Choudhary
Hi Cathy,

Thanks for heading up this meeting. No worries about the timing, time zones
are really difficult to handle ;)

I do agree with Armando that finalization of the API is important and must
be done at the earliest. As discussed in the last meeting, I will start
working on this and hope that by the next meeting we have something in the kitty.

Thanks
Vikram

On Fri, Jun 5, 2015 at 6:36 AM, Armando M. arma...@gmail.com wrote:


 On 4 June 2015 at 14:17, Cathy Zhang cathy.h.zh...@huawei.com wrote:

  Thanks for joining the service chaining meeting today! Sorry for the
 time confusion. We will correct the weekly meeting time to 1700UTC (10am
 pacific time) Thursday #openstack-meeting-4 on the OpenStack meeting
 page.




 Cathy, thanks for driving this. I took the liberty to carry out one of the
 actions identified in the meeting: the creation of repo to help folks
 collaborate over code/documentation/testing etc [1]. As for the core team
 definition, we'll start with a single member who can add new folks as more
 docs/code gets poured in.

 One question I had when looking at the minutes, was regarding slides [2].
 Not sure if discussing deployment architectures when the API is still
 baking is premature, but I wonder if you had given some thoughts into
 having a pure agentless architecture even for the OVS path.

 Having said that, as soon as the repo is up and running, I'd suggest to
 move any relevant document (e.g. API proposal, use cases, etc) over to the
 repo and reboot the review process so that everyone can be on the same page.

 Cheers,
 Armando

 [1] https://review.openstack.org/#/c/188637/
 [2]
 https://docs.google.com/presentation/d/1SpVyLBCMRFBpMh7BsHmpENbSY6qh1s5NRsAS68ykd_0/edit#slide=id.p


  Meeting Minutes:
 http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.html

 Meeting Minutes (text):
 http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.txt

 Meeting Log:
 http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.log.html



 The next meeting is scheduled for June 11 (same place and time).



 Thanks,

 Cathy





Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-04 Thread John Wood
Hello folks,

Regarding option C, if group IDs are unique within a given cloud/context, and 
these are discoverable by clients that can then set the ACL on a secret in 
Barbican, then that seems like a viable option to me. As it is now, the user 
information provided to the ACL is the user ID information as found in 
X-User-Ids, not user names.

To Kevin’s point though, are these group IDs unique across domains now, or in 
the future? If not the more complex tuples suggested could be used, but seem 
more error prone to configure on an ACL.

Thanks,
John

From: Fox, Kevin M kevin@pnnl.gov
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, June 4, 2015 at 6:01 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
X-Group-xxxx in token validation

In Juno I tried adding a user in Domain A to a group in Domain B. That currently 
is not supported. It would be very handy though.

We're getting a ways from the original part of the thread, so I may have lost 
some context, but I think the original question was whether Barbican can add group 
names to its resource ACLs.

Since two administrative domains can issue the same group name, it's not safe, I 
believe.

Simply ensuring that the group name is associated with a user, and that the domain 
for the user matches the domain for the group, wouldn't work, because someone with 
control of their own domain can just make a user, give them a group with the name 
they want, and come take your credentials.

What may be safe is for the Barbican ACL to contain the group_id, if group IDs are 
unique across all domains, or to take a (domain_id, group_name) pair for the ACL.

Thanks,
Kevin


From: Dolph Mathews [dolph.math...@gmail.com]
Sent: Thursday, June 04, 2015 1:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
X-Group-xxxx in token validation

Problem! In writing a spec for this ( https://review.openstack.org/#/c/188564/ 
), I remembered that groups are domain-specific entities, which complicates the 
problem of providing X-Group-Names via middleware.

The problem is that we can't simply expose X-Group-Names to underlying services 
without either A) making a well-documented assumption about the ONE owning 
domain scope of ALL included groups, B) passing significantly more data to 
underlying services than just a list of names (a domain scope for every group), 
C) passing only globally-unique group IDs (services would then have to retrieve 
additional details about each group from keystone if they so cared).

Option A) More specifically, keystone could opt to enumerate the groups that 
belong to the same domain as the user. In this case, it'd probably make more 
sense from an API perspective if the groups enumeration were part of the 
user resource in the token response body (the user object already has a 
containing domain ID). That means that IF a user were to be assigned a group 
membership in another domain (assuming we didn't move to disallowing that 
behavior at some point), then it would have to be excluded from this list. If 
that were true, then I'd also follow that X-Group-Names become 
X-User-Group-Names, so that it might be more clear that they belong to the 
X-User-Domain-*.

Option B) This is probably the most complex solution, but also the most 
explicit. I have no idea how this interface would look in terms of headers 
using current conventions. If we're going to break conventions, then I'd want 
to pass a id+domain_id+name for each group reference. So, rather than including 
a list of names AND a list of IDs, we'd have some terribly encoded list of 
group objects (I'm not sure what the HTTP convention is on this sort of use 
case, and hoping someone can illustrate a better solution given the 
representation below):

  X-Groups: id%3D123%2Cdomain_id%3D456%2Cname%3Dabc,id%3D789%2Cdomain_id%3D357%2Cname%3Ddef
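For what it's worth, a rough sketch of how a service might decode a header like
that (purely illustrative, not an agreed convention; Python 3 shown, Python 2
would use urllib.unquote):

    from urllib.parse import unquote

    def parse_x_groups(header_value):
        """Turn 'id%3D123%2Cdomain_id%3D456%2Cname%3Dabc,...' into dicts."""
        groups = []
        # Literal commas separate groups; commas/equals inside a group are
        # percent-encoded, so they survive the outer split.
        for encoded in header_value.split(','):
            fields = unquote(encoded).split(',')
            groups.append(dict(f.split('=', 1) for f in fields))
        return groups

    # parse_x_groups("id%3D123%2Cdomain_id%3D456%2Cname%3Dabc")
    # -> [{'id': '123', 'domain_id': '456', 'name': 'abc'}]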

Option C) Federated tokens would actually require solution (C) today because 
they only include group IDs, not names. But the group enumeration in federated 
tokens was also only intended to be consumed by keystone, so that's not really 
an issue for that one use case. But option (C) would mean there are no 
X-Group-Names passed to services, just X-Group-Ids. I'm guessing this won't 
provide the user experience that Barbican is looking for?


I'm leaning towards solution (A), but curious if that'll work for Barbican 
and/or if anyone has an idea that I'm overlooking.


On Thu, Jun 4, 2015 at 8:18 AM, Dolph Mathews dolph.math...@gmail.com wrote:
To clarify: we already have to include the groups 

Re: [openstack-dev] [nova] [glance] How to deal with aborted image read?

2015-06-04 Thread Chris Friesen

On 06/04/2015 07:16 PM, Joshua Harlow wrote:

Perhaps someone needs to use (or forgot to use) contextlib.closing?

https://docs.python.org/2/library/contextlib.html#contextlib.closing

Or contextlib2 also may be useful:

http://contextlib2.readthedocs.org/en/latest/#contextlib2.ExitStack


The complication is that the error happens in nova, but it's glance-api that 
holds the file descriptor open.


So somehow the error has to be detected in nova, then the fact that nova is no 
longer interested in the file needs to be propagated through glanceclient via an 
HTTP call into glance-api so that it knows it can close the file descriptor. 
And I don't think that an API to do this exists currently.
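For reference, plain Python generators already let the consumer say "I'm done":
calling close() raises GeneratorExit inside the generator, so a finally clause
can release the file descriptor. Something equivalent would still have to be
plumbed from nova through glanceclient to glance-api over HTTP; this is only a
local sketch of the mechanism, not the actual glance_store code:

    def chunked_file(path, chunk_size=65536):
        """Yield chunks of a file, always closing it when iteration ends."""
        fd = open(path, 'rb')
        try:
            while True:
                chunk = fd.read(chunk_size)
                if not chunk:
                    break
                yield chunk
        finally:
            fd.close()  # runs on exhaustion, on .close(), or at garbage collection

    reader = chunked_file('/path/to/image')
    first = next(reader)
    reader.close()  # consumer aborts early; the finally clause closes the fd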


Chris



Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-04 Thread Adrian Otto
Thanks Jay. I have approved the direction for this, and have marked it as under 
discussion. I also added it to our next team meeting agenda so we can consider 
any alternate viewpoints before approving the specification as worded.

Adrian

On Jun 4, 2015, at 9:12 PM, Jay Lau jay.lau@gmail.com wrote:

I have filed a bp for this 
https://blueprints.launchpad.net/magnum/+spec/auto-generate-name Thanks

2015-06-04 14:14 GMT+08:00 Jay Lau jay.lau@gmail.com:
Thanks Adrian, I see. Clear now.

2015-06-04 11:17 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:
Jay,

On Jun 3, 2015, at 6:42 PM, Jay Lau jay.lau@gmail.com wrote:

Thanks Adrian, some questions and comments in-line.

2015-06-03 10:29 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:
I have reflected on this further and offer this suggestion:

1) Add a feature to Magnum to auto-generate human readable names, like Docker 
does for un-named containers, and ElasticSearch does for naming cluster nodes. 
Use this feature if no name is specified upon the creation of a Bay or Baymodel.
+1 on this

-and-

2) Add configuration directives (default=FALSE) for allow_duplicate_bay_name 
and allow_duplicate_baymodel_name. If TRUE, duplicate named Bay and BayModel 
resources will be allowed, as they are today.

This way, by default Magnum requires a unique name, and if none is specified, 
it will automatically generate a name. This way no additional burden is put on 
users who want to act on containers exclusively using UUIDs, and cloud 
operators can decide if they want to enforce name uniqueness or not.

In the case of clouds that want to allow sharing access to a BayModel between 
multiple tenants (example: a global BayModel named “kubernetes”) with 
allow_duplicate_baymodel_name set to FALSE, a user will still be allowed to
Should this be allow_duplicate_baymodel_name set to TRUE?

I know this is confusing, but yes, what I wrote was correct. Perhaps I could 
rephrase it to clarify:

Regardless of the setting of allow_duplicate_bay* settings, we should allow a 
user to create a BayModel with the same name as a global or shared one in order 
to override the one that already exists from another source with one supplied 
by the user. When referred to by name, the one created by the user would be 
selected in the case where each has the same name assigned.

create a BayModel with the name “kubernetes” and it will override the global 
one. If a user-supplied BayModel is present with the same name as a global one, 
we shall automatically select the one owned by the tenant.
+1 on this. One question: what does a global BayModel mean? In Magnum, 
all BayModels belong to a tenant, and it seems there is no global BayModel?

This is a concept we have not actually discussed, and we don't have today as a 
feature. The idea is that in addition to the BayModel resources that tenants 
create, we could also have ones that the cloud operator creates, and 
automatically expose to all tenants in the system. I am referring to these as 
global BayModel resources as a potential future enhancement.

The rationale for such a global resource is a way for the Cloud Operator to 
pre-define the COE's they support, and pre-seed the Magnum environment with 
such a configuration for all users. Implementing this would require a solution 
for how to handle the ssh keypair, as one will need to be generated uniquely 
for every tenant. Perhaps we could have a procedure that a tenant uses to 
activate the BayModel by somehow adding their own public ssh key to a local 
subclass of it. Perhaps this could be implemented as a user defined BayModel 
that has a parent_id set to the uuid of a parent baymodel. When we instantiate 
one, we would merge the two into a single resource.

All of this is about anticipating possible future features. The only reason I 
am mentioning this is that I want us to think about where we might go with 
resource sharing so that our name uniqueness decision does not preclude us from 
later going in this direction.

Adrian


About Sharing of BayModel Resources:

Similarly, if we add features to allow one tenant to share a BayModel with 
another tenant (pending acceptance of the offered share), and duplicate names 
are allowed, then prefer in this order: 1) Use the resource owned by the same 
tenant, 2) Use the resource shared by the other tenant (post acceptance only), 
3) Use the global resource. If duplicates exist in the same scope of ownership, 
then raise an exception requiring the use of a UUID in that case to resolve the 
ambiguity.
We can file a bp to trace this.

One expected drawback of this approach is that tools designed to integrate with 
one Magnum may not work the same with another Magnum if the 
allow_duplicate_bay* settings are changed from the default values on one but 
not the other. This should be made 

Re: [openstack-dev] [all] IRC meetings agenda is now driven from Gerrit !

2015-06-04 Thread Gareth
As we can generate the data dynamically, it would be possible to develop a web UI
for it which shows a real-time, readable schedule table, provides
download links for custom meeting groups, etc.

On Sat, May 30, 2015 at 2:51 PM yatin kumbhare yatinkumbh...@gmail.com
wrote:

 Thank you, Thierry & Tony

 --Daniel
 So could we publish the individual ical files for each meeting too.
 That way I could simply add nova.ical to my calendar and avoid the other
 80 openstack meetings appearing in my schedule.

 this answers my second questions.

 Regards,
 Yatin

 On Sat, May 30, 2015 at 3:30 AM, Tony Breeds t...@bakeyournoodle.com
 wrote:

 On Fri, May 29, 2015 at 10:42:58AM +0530, yatin kumbhare wrote:
  Great Work!
 
  New meetings or meeting changes would be proposed in Gerrit, and
  check/gate tests would make sure that there aren't any conflict.
 
   --Will this tell us upfront which meeting slots (time and IRC meeting
  channels) are available on any given day of the week? This could
  bring down the number of patchsets and gate test failures. Maybe :)

  That'd be an interesting feature, but in reality it's trivial to run yaml2ical
  locally against the irc-meetings repo (which you already need to propose the
  change).  Or you can just push the review and see if it fails.

   There are some meetings with a weekly interval of 2; I assume we would get
   the iCal by SUMMARY?

 I'm not sure I follow what you're asking here.

 Yours Tony.



Re: [openstack-dev] [Keystone] Domain and Project naming

2015-06-04 Thread Jamie Lennox


- Original Message -
 From: Adam Young ayo...@redhat.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Thursday, 4 June, 2015 2:25:52 PM
 Subject: [openstack-dev] [Keystone] Domain and Project naming
 
 With Hierarchical Multitenancy, we have the issue that a project is
 currently restricted in its naming further than it should be.  The domain
 entity enforces that all project names under the domain be
 unique, but really what we should say is that all project names under a
 single parent project be unique.  However, we have, at present, an API
 which allows a user to specify the domain by either name or ID, and the project,
 again, by either name or ID, but here we care only about the name.  This
 can be used either in specifying the token, or in operations on the
 project API.
 
 We should change project naming to be nestable, and since we don't have a
 delimiter set, we should expect the names to be an array, where today we
 might have:
 
  "project": {
      "domain": {
          "id": "1789d1",
          "name": "example.com"
      },
      "id": "263fd9",
      "name": "project-x"
  }
 
 we should allow and expect:
 
  "project": {
      "domain": {
          "id": "1789d1",
          "name": "example.com"
      },
      "id": "263fd9",
      "name": ["grandpa", "dad", "daughter"]
  }
 
 This will, of course, break Horizon and lots of other things, which
 means we need a reasonable way to display these paths.  The typical UI
 approach is a breadcrumb trail, and I think something where we put the
 segments of the path in the UI, each clickable, should be
 understandable: I'll defer to the UX experts if this is reasonable or not.
 
 The alternative is that we attempt to parse the project names. Since we
 have not reserved a delimiter, we will break someone somewhere if we
 force one on people.
 
 
 As an alternative, we should start looking into following DNS standards
 for naming projects and hosts.  While a domain should not be required to
 be a DNS-registered domain name, we should allow for the case where a
 user wants that to be the case, and to synchronize naming across
 multiple clouds.  In order to enforce this, we would have to have an
 indicator on a domain name that it has been checked with DNS;  ideally,
 the user would add a special SRV or TXT record or something that
 Keystone could use to confirm that the user has okayed this domain name
 being used by this cloud...or something perhaps with DNSSEC, checking
 that a user has permission to assign a specific domain name to a set of
 resources in the cloud.  If we do that, the projects under that domain
 should also be valid DNS subzones, and the hosts either FQDNs or some
 alternate record...this would tie in well with Designate.
 
 Note that I am not saying "force this" but rather "allow this", as it
 will simplify the naming when bursting from cloud to cloud:  the Domain
 and project names would then be synchronized via DNS regardless of
 hosting provider.
 
 As an added benefit, we could provide a SRV or TEXT record (or some new
 URL type..I heard one is coming) that describes where to find the home
 Keystone server for a specified domain...it would work nicely with the
 K2K strategy.
 
 If we go with DNS project naming, we can leave all project names in a
 flat string.
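Purely as an illustration of the two representations being discussed (not
existing Keystone behaviour; the domain and segment names come from the JSON
example above):

    # Hierarchical name as an array, as in the JSON example:
    name_as_list = ['grandpa', 'dad', 'daughter']

    # DNS-style flat string under a registered domain: the path is reversed
    # and joined with dots, so no new delimiter has to be invented.
    domain = 'example.com'
    name_as_dns = '.'.join(reversed(name_as_list)) + '.' + domain
    # -> 'daughter.dad.grandpa.example.com'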
 
 
 Note that the DNS approach can work even if the user does not wish to
 register their own DNS.  A hosting provider (I'll pick dreamhost, cuz  I
 know they are listening) could say that each of their tenants picks a
 user name...say that mine is admiyo; they would then create a subdomain
 of admiyo.dreamcompute.dreamhost.com.  All of my subprojects would then
 get additional zones under that.  If I were then to burst from there to
 Bluebox, the Keystone domain name would be the one that I was assigned
 back at Dreamhost.

Back up. Are our current restrictions a problem?

Even with hierarchical projects is it a problem to say that a project name 
still must be unique per domain? I get that in theory you might want to be able 
to identify a nested project by name under other projects but that's not 
something we have to allow immediately.

I haven't followed the reseller case closely, but in any situation where you hand 
off control like that we are re-establishing a domain, and so in a multitenancy 
situation each domain can still use its own project names. 

I feel like discussions around nested naming schemes and tying domains to DNS 
are really premature until we have people that are actually using hierarchical 
projects. 




Re: [openstack-dev] Standard way to indicate a partial blueprint implementation?

2015-06-04 Thread Chris Friesen

On 06/04/2015 05:23 PM, Zane Bitter wrote:

Ever since we established[1] a format for including metadata about bugs in Git
commit messages that included a 'Partial-Bug' tag, people have been looking for
a way to do the equivalent for partial blueprint implementations.


snip

 I have personally been using:


   Implements: partial-blueprint x

but I don't actually care much. I would also be fine with:

   Partially-Implements: blueprint x


If we need one, the second one gets my vote.

I'm not sure we actually need one though.  Do we really care about the 
distinction between a commit that fully implements a blueprint and one that 
partially implements it?  I'd expect that most blueprints require multiple 
commits, and so blueprint X could implicitly be assumed to be a partial 
implementation in the common case.
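For example, a commit message footer using the second form might look like this
(blueprint name is a placeholder, for illustration only):

    Add host weighing to the scheduler

    Partially-Implements: blueprint example-blueprint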


Chris



Re: [openstack-dev] [all] IRC meetings agenda is now driven from Gerrit !

2015-06-04 Thread Tony Breeds
On Fri, Jun 05, 2015 at 01:18:57AM +, Gareth wrote:
 As we could generate data dynamically, it is possible to develop a web UI
 for it which shows real-time and readable scheduler table, provides
 download link of  custom meetings group, etc.

Sure, I'm working on the second part (albeit slowly).  I've been thinking about
what tooling we could create to make finding conflicts easier.
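A rough sketch of what such a conflict check could look like, assuming each
meeting file carries a schedule list with day/time/irc fields (the schema is
assumed here for illustration; the real irc-meetings layout may differ):

    import collections
    import glob

    import yaml

    def find_conflicts(meetings_dir):
        """Report (day, time, channel) slots claimed by more than one meeting."""
        slots = collections.defaultdict(list)
        for path in glob.glob(meetings_dir + '/*.yaml'):
            with open(path) as f:
                meeting = yaml.safe_load(f)
            for sched in meeting.get('schedule', []):
                # Fields assumed: day, time, irc; biweekly meetings would need
                # frequency handling on top of this.
                key = (sched['day'], sched['time'], sched['irc'])
                slots[key].append(meeting.get('project', path))
        return dict((k, v) for k, v in slots.items() if len(v) > 1)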

Yours Tony.




Re: [openstack-dev] novnc console connection timetout

2015-06-04 Thread 张扬
Thanks folks, it's fixed via https://bugs.launchpad.net/mos/+bug/1409661 :-0

2015-06-04 20:29 GMT+08:00 张扬 w90p...@gmail.com:

 Hi Guys,

 I am a newbie on openstack. I deployed my first openstack env via Fuel
 6.0, but unfortunately I could never access the VM's VNC console; it
 reports a connection timeout. I also found the following call trace in
 nova-novncproxy.log.  Could anybody give me a hint on it? I am not sure
 whether it's appropriate to send this to the mailing list or not; if not,
 please help me forward it to the right mailing list. Thanks in advance! :-).

 BTW: I also tried to change the def_con_timeout, but it did not work for me.
 The following is the relevant log and configuration information.

 controller: nova.conf

 root@node-9:~# cat /etc/nova/nova.conf
 [DEFAULT]
 dhcpbridge_flagfile=/etc/nova/nova.conf
 dhcpbridge=/usr/bin/nova-dhcpbridge
 logdir=/var/log/nova
 state_path=/var/lib/nova
 lock_path=/var/lock/nova
 force_dhcp_release=True
 iscsi_helper=tgtadm
 libvirt_use_virtio_for_bridges=True
 connection_type=libvirt
 root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
 verbose=True
 ec2_private_dns_show_ip=True
 api_paste_config=/etc/nova/api-paste.ini
 volumes_path=/var/lib/nova/volumes
 enabled_apis=ec2,osapi_compute
 flat_interface=eth2.103
 #debug=False
 debug=True
 log_dir=/var/log/nova
 network_manager=nova.network.manager.FlatDHCPManager
 amqp_durable_queues=False
 rabbit_hosts=10.0.21.5:5672
 quota_volumes=100
 notify_api_faults=False
 flat_network_bridge=br100
 resume_guests_state_on_host_boot=True
 memcached_servers=10.0.21.5:11211

 scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
 rabbit_use_ssl=False
 quota_ram=51200
 notification_driver=messaging
 max_io_ops_per_host=8
 quota_max_injected_file_content_bytes=102400
 s3_listen=0.0.0.0
 quota_driver=nova.quota.NoopQuotaDriver
 glance_api_servers=192.168.1.103:9292
 max_age=0
 quota_security_groups=10
 novncproxy_host=192.168.1.103
 rabbit_userid=nova
 rabbit_ha_queues=True
 rabbit_password=FMskSLdn
 report_interval=10
 scheduler_weight_classes=nova.scheduler.weights.all_weighers
 quota_cores=100
 reservation_expire=86400
 rabbit_virtual_host=/
 force_snat_range=0.0.0.0/0
 image_service=nova.image.glance.GlanceImageService
 use_cow_images=True
 quota_max_injected_files=50
 notify_on_state_change=vm_and_task_state
 scheduler_host_subset_size=30
 novncproxy_port=6080
 ram_allocation_ratio=1.0
 compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
 quota_security_group_rules=20
 disk_allocation_ratio=1.0
 quota_max_injected_file_path_bytes=4096
 quota_floating_ips=100
 quota_key_pairs=10
 scheduler_max_attempts=3
 cpu_allocation_ratio=8.0
 multi_host=True
 max_instances_per_host=50
 scheduler_available_filters=nova.scheduler.filters.all_filters
 public_interface=eth1
 service_down_time=60
 syslog_log_facility=LOG_LOCAL6
 quota_gigabytes=1000
 use_syslog_rfc_format=True
 quota_instances=100
 scheduler_host_manager=nova.scheduler.host_manager.HostManager
 notification_topics=notifications
 osapi_compute_listen=0.0.0.0
 ec2_listen=0.0.0.0
 volume_api_class=nova.volume.cinder.API
 service_neutron_metadata_proxy=False
 use_forwarded_for=False
 osapi_volume_listen=0.0.0.0
 metadata_listen=0.0.0.0
 auth_strategy=keystone
 ram_weight_multiplier=1.0
 keystone_ec2_url=http://10.0.21.5:5000/v2.0/ec2tokens
 quota_metadata_items=1024
 osapi_compute_workers=8
 rootwrap_config=/etc/nova/rootwrap.conf
 rpc_backend=nova.openstack.common.rpc.impl_kombu
 fixed_range=10.0.23.0/24
 use_syslog=True
 metadata_workers=8
 dhcp_domain=novalocal
 allow_resize_to_same_host=True
 flat_injected=False

 [DATABASE]
 max_pool_size=30
 max_retries=-1
 max_overflow=40

 [database]
 idle_timeout=3600
 connection=mysql://nova:LcHgm0PN@127.0.0.1/nova?read_timeout=60

 [keystone_authtoken]
 signing_dirname=/tmp/keystone-signing-nova
 signing_dir=/tmp/keystone-signing-nova
 auth_port=35357
 admin_password=FMxM1wqW
 admin_user=nova
 auth_protocol=http
 auth_host=10.0.21.5
 admin_tenant_name=services
 auth_uri=http://10.0.21.5:5000/

 [conductor]
 workers=8

 ===
 compute node: nova.conf

 root@node-8:~# cat /etc/nova/nova.conf
 [DEFAULT]
 notification_driver=ceilometer.compute.nova_notifier
 notification_driver=nova.openstack.common.notifier.rpc_notifier
 dhcpbridge_flagfile=/etc/nova/nova.conf
 dhcpbridge=/usr/bin/nova-dhcpbridge
 logdir=/var/log/nova
 state_path=/var/lib/nova
 lock_path=/var/lock/nova
 force_dhcp_release=True
 iscsi_helper=tgtadm
 libvirt_use_virtio_for_bridges=True
 connection_type=libvirt
 root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
 verbose=True
 ec2_private_dns_show_ip=True
 api_paste_config=/etc/nova/api-paste.ini
 volumes_path=/var/lib/nova/volumes
 enabled_apis=metadata
 flat_interface=eth2.103
 debug=False
 

[openstack-dev] What's Up, Doc? 5 June 2015

2015-06-04 Thread Lana Brindley

Hi everyone,

With our Kilo branch now cut, we're running headlong into the Liberty
development cycle! And don't forget to have your say on what the M
release should be called:
https://wiki.openstack.org/wiki/Release_Naming/M_Proposals

If you have content you would like to add, or you would like to be added
to the distribution list, please email me directly at
openst...@lanabrindley.com. I've also decided to start drafting these
'in the open' (as it were). If you want to see my early drafts and
typos, or check on old newsletters, you can do so here:
https://wiki.openstack.org/w/index.php?title=Documentation/WhatsUpDoc

== Progress towards Liberty ==

* RST conversion:
** Install Guide: there's now a spec awaiting review:
https://review.openstack.org/#/c/183138/
** Cloud Admin Guide: has been started! Get in touch with Brian or Joe,
or sign up here:
https://wiki.openstack.org/wiki/Documentation/Migrate#Cloud_Admin_Guide_Migration
** HA Guide: is also underway! Get in touch with Meg or Matt:
https://wiki.openstack.org/wiki/Documentation/HA_Guide_Update
* User Guides information architecture overhaul
** Waiting on the RST conversion of the Cloud Admin Guide to be complete
* Greater focus on helping out devs with docs in their repo
** No progress this week
* Improve how we communicate with and support our corporate contributors
** I met with Alison of Oracle to discuss how we can help out their team
* Improve communication with Cross Team Liaisons
** The current list of liaisons is available here:
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation
** I will be contacting CPLs shortly to discuss ways to improve

== Kilo Branching ==

Thanks to Andreas, Anne, and Tom for getting the Stable/Kilo branch
completed this week. With this, Kilo is now complete* and patches for
the Install Guide and the Config Reference will now be against Liberty
unless you specifically backport a change to Kilo. You can do this by
committing to master with the line ''Backport: Kilo'' in your commit
message. Remember that all other books are continuously integrated, and
this isn't necessary.

[*] Anne: feel free to go have a lie down ;)

== RST Migration ==

The next books we are focusing on for RST conversion are the Install
Guide, Cloud Admin Guide, and the HA Guide. If you would like to assist,
please get in touch with the appropriate speciality team.

* Install Guide:
** Contact Karin Levenstein karin.levenst...@rackspace.com
** Blueprint:
https://blueprints.launchpad.net/openstack-manuals/+spec/installguide-liberty
** Spec WIP: https://review.openstack.org/#/c/183138/

* Cloud Admin Guide:
** Contact Brian Moss kallimac...@gmail.com & Joseph Robinson
joseph.r.em...@gmail.com
** Blueprint:
https://blueprints.launchpad.net/openstack-manuals/+spec/reorganise-user-guides
** Sign up to help out here:
https://wiki.openstack.org/wiki/Documentation/Migrate#Cloud_Admin_Guide_Migration

* HA Guide
** Contact Meg McRoberts dreidellh...@yahoo.com or Matt Griffin
matt.grif...@percona.com
** Blueprint:
https://blueprints.launchpad.net/openstack-manuals/+spec/improve-ha-guide


For books that are now being converted, don't forget that any change you
make to the XML must also be made to the RST version until conversion is
complete. Our lovely team of cores will be keeping an eye out to make
sure loose changes to XML don't pass the gate, but try to help them out
by pointing out both patches in your reviews.

== New Core Reviewer Process ==

We did our first monthly round of Core Team review this week, and would
like to welcome our newest core team member, Darren Chan. We also bid a
fond farewell to Summer Long and Bryan Payne, who have been great core
team members and contributors, but have now moved on to other
adventures. We sincerely thank them both for their commitment over the
time they were core.

The new core process is documented here:
https://wiki.openstack.org/wiki/Documentation/HowTo#Achieving_core_reviewer_status
The next round will be around 1 July.

== Doc team meeting ==

The US meeting was held this week, see the minutes here to catch up:
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2015-06-03

Thanks Shilla for running the US team meeting!

Next meetings are:
APAC: Wednesday 10 June, 00:30:00 UTC
US: Wednesday 17 June, 14:00:00 UTC

Please go ahead and add any agenda items to the meeting page here:
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

Thanks!
Lana

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

Re: [openstack-dev] [Keystone] Domain and Project naming

2015-06-04 Thread Dolph Mathews
On Wed, Jun 3, 2015 at 11:25 PM, Adam Young ayo...@redhat.com wrote:

 With Hierarchical Multitenancy, we have the issue that a project is
 currently restricted in its naming further than it should be.  The domain
 entity enforces that all project names under the domain be unique,
 but really what we should say is that all project names under a single parent
 project be unique.  However, we have, at present, an API which allows a
 user to specify the domain by either name or ID, and the project, again, by either
 name or ID, but here we care only about the name.  This can be used either
 in specifying the token, or in operations on the project API.

 We should change project naming to be nestable, and since we don't have a
 delimiter set, we should expect the names to be an array, where today we
 might have:

 "project": {
     "domain": {
         "id": "1789d1",
         "name": "example.com"
     },
     "id": "263fd9",
     "name": "project-x"
 }

 we should allow and expect:

 "project": {
     "domain": {
         "id": "1789d1",
         "name": "example.com"
     },
     "id": "263fd9",
     "name": ["grandpa", "dad", "daughter"]
 }


What is the actual project name here, and how do I specify it using my
existing OS_PROJECT_NAME environment variable?



 This will, of course, break Horizon and lots of other things, which means
 we need a reasonable way to display these paths.  The typical UI approach
 is a breadcrumb trail, and I think something where we put the segments of
 the path in the UI, each clickable, should be understandable: I'll defer to
 the UX experts if this is reasonable or not.

 The alternative is that we attempt to parse the project names. Since we
 have not reserved a delimiter, we will break someone somewhere if we force
 one on people.


 As an alternative, we should start looking into following DNS standards
 for naming projects and hosts.  While a domain should not be required to be
 a DNS-registered domain name, we should allow for the case where a user
 wants that to be the case, and to synchronize naming across multiple
 clouds.  In order to enforce this, we would have to have an indicator on a
 domain name that it has been checked with DNS;  ideally, the user would add
 a special SRV or TXT record or something that Keystone could use to
 confirm that the user has okayed this domain name being used by this
 cloud...or something perhaps with DNSSEC, checking that a user has
 permission to assign a specific domain name to a set of resources in the
 cloud.  If we do that, the projects under that domain should also be valid
 DNS subzones, and the hosts either FQDNs or some alternate record...this
 would tie in well with Designate.

 Note that I am not saying "force this" but rather "allow this", as it will
 simplify the naming when bursting from cloud to cloud:  the Domain and
 project names would then be synchronized via DNS regardless of hosting
 provider.

 As an added benefit, we could provide a SRV or TEXT record (or some new
 URL type..I heard one is coming) that describes where to find the home
 Keystone server for a specified domain...it would work nicely with the K2K
 strategy.

 If we go with DNS project naming, we can leave all project names in a flat
 string.


 Note that the DNS approach can work even if the user does not wish to
 register their own DNS.  A hosting provider (I'll pick dreamhost, cuz  I
 know they are listening) could say that each of their tenants picks a user
 name...say that mine is admiyo; they would then create a subdomain of
 admiyo.dreamcompute.dreamhost.com.  All of my subprojects would then get
 additional zones under that.  If I were then to burst from there to
 Bluebox, the Keystone domain name would be the one that I was assigned back
 at Dreamhost.



Re: [openstack-dev] [neutron] Service Chain project IRC meeting minutes - 06/04/2015

2015-06-04 Thread Armando M.
On 4 June 2015 at 19:32, Vikram Choudhary viks...@gmail.com wrote:

 Hi Cathy,

 Thanks for heading up this meeting. No worries about the timing, time
 zones are really difficult to handle ;)

 I do agree with Armando that finalization of the API is important and must
 be done at the earliest. As discussed over the last meeting I will start
 working on this and hope by the next meeting we have something in the kitty.


Are you going to use a polished up version of spec [1] (which needs a
rebase and a transition to the networking-sfc repo when that's ready)?

[1] https://review.openstack.org/#/c/177946/



 Thanks
 Vikram


 On Fri, Jun 5, 2015 at 6:36 AM, Armando M. arma...@gmail.com wrote:


 On 4 June 2015 at 14:17, Cathy Zhang cathy.h.zh...@huawei.com wrote:

  Thanks for joining the service chaining meeting today! Sorry for the
 time confusion. We will correct the weekly meeting time to 1700UTC (10am
 pacific time) Thursday #openstack-meeting-4 on the OpenStack meeting
 page.




 Cathy, thanks for driving this. I took the liberty to carry out one of
 the actions identified in the meeting: the creation of repo to help folks
 collaborate over code/documentation/testing etc [1]. As for the core team
 definition, we'll start with a single member who can add new folks as more
 docs/code gets poured in.

 One question I had when looking at the minutes, was regarding slides [2].
 Not sure if discussing deployment architectures when the API is still
 baking is premature, but I wonder if you had given some thoughts into
 having a pure agentless architecture even for the OVS path.

 Having said that, as soon as the repo is up and running, I'd suggest to
 move any relevant document (e.g. API proposal, use cases, etc) over to the
 repo and reboot the review process so that everyone can be on the same page.

 Cheers,
 Armando

 [1] https://review.openstack.org/#/c/188637/
 [2]
 https://docs.google.com/presentation/d/1SpVyLBCMRFBpMh7BsHmpENbSY6qh1s5NRsAS68ykd_0/edit#slide=id.p


  Meeting Minutes:
 http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.html

 Meeting Minutes (text):
 http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.txt

 Meeting Log:
 http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.log.html



 The next meeting is scheduled for June 11 (same place and time).



 Thanks,

 Cathy




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] How to deal with aborted image read?

2015-06-04 Thread Joshua Harlow

Perhaps someone needs to use (or forgot to use) contextlib.closing?

https://docs.python.org/2/library/contextlib.html#contextlib.closing

Or contextlib2 also may be useful:

http://contextlib2.readthedocs.org/en/latest/#contextlib2.ExitStack
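
As a minimal sketch of the pattern (ChunkedReader below is a hypothetical
stand-in, not the actual glance_store class):

    import contextlib

    class ChunkedReader(object):
        """Hypothetical stand-in for a reader that holds an open file."""
        def __init__(self, path, chunk_size=65536):
            self.fp = open(path, 'rb')
            self.chunk_size = chunk_size

        def __iter__(self):
            while True:
                chunk = self.fp.read(self.chunk_size)
                if not chunk:
                    return
                yield chunk

        def close(self):
            self.fp.close()

    def copy_image(src_path, dst):
        # closing() guarantees reader.close() runs even if we stop iterating
        # early, e.g. because the destination ran out of space.
        with contextlib.closing(ChunkedReader(src_path)) as reader:
            for chunk in reader:
                dst.write(chunk)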

-Josh

Chris Friesen wrote:

On 06/04/2015 03:01 AM, Flavio Percoco wrote:

On 03/06/15 16:46 -0600, Chris Friesen wrote:

We recently ran into an issue where nova couldn't write an image file
due to
lack of space and so just quit reading from glance.

This caused glance to be stuck with an open file descriptor, which
meant that
the image consumed space even after it was deleted.

I have a crude fix for nova at
https://review.openstack.org/#/c/188179/;
which basically continues to read the image even though it can't
write it.
That seems less than ideal for large images though.

Is there a better way to do this? Is there a way for nova to indicate to
glance that it's no longer interested in that image and glance can
close the
file?

If I've followed this correctly, on the glance side I think the code in
question is ultimately
glance_store._drivers.filesystem.ChunkedFile.__iter__().


Actually, to be honest, I was quite confused by the email :P

Correct me if I still didn't understand what you're asking.

You ran out of space on the Nova side while downloading the image and
there's a file descriptor leak somewhere either in that lovely (sarcasm)
glance wrapper or in glanceclient.


The first part is correct, but the file descriptor is actually held by
glance-api.


Just by reading your email and glancing at your patch, I believe the bug
might be in glanceclient, but I'd need to dive into this. The piece of
code you'll need to look into is [0].

glance_store is just used server side. If that's what you meant -
glance is keeping the request and the ChunkedFile around - then yes,
glance_store is the place to look into.

[0]
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v1/images.py#L152



I believe what's happening is that the ChunkedFile code opens the file
and creates the iterator. Nova then starts iterating through the file.

If nova (or any other user of glance) iterates all the way through the
file then the ChunkedFile code will hit the finally clause in
__iter__() and close the file descriptor.

If nova starts iterating through the file and then stops (due to running
out of room, for example), the ChunkedFile.__iter__() routine is left
with an open file descriptor. At this point deleting the image will not
actually free up any space.
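
(A minimal illustration of that failure mode -- not the actual glance_store
code:)

    def chunked_file(path, chunk_size=65536):
        fp = open(path, 'rb')
        try:
            while True:
                chunk = fp.read(chunk_size)
                if not chunk:
                    break
                yield chunk
        finally:
            fp.close()

    reader = chunked_file('/path/to/image')
    for chunk in reader:
        break       # consumer gives up early; fp is still open at this point
    reader.close()  # explicitly closing the generator runs the finally block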

I'm not a glance guy so I could be wrong about the code. The
externally-visible data are:
1) glance-api is holding an open file descriptor to a deleted image file
2) If I kill glance-api the disk space is freed up.
3) If I modify nova to always finish iterating through the file the
problem doesn't occur in the first place.

Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Adam Young

On 06/04/2015 05:49 PM, Morgan Fainberg wrote:

Hi Everyone!

I've been reading through this thread and have had some conversations 
along the side and wanted to jump in to distill out what I think are 
the key points we are trying to address here. I'm going to outline 
about 4 items that seem to make sense to me regarding the evolution of 
policy. I also want to say that the notification that something has 
changed from the defaults in a way that may cause odd behavior to the 
side (the warning Sean was outlining); we can keep the desire to have 
those types of warnings for operators down the line (nothing that is 
being proposed here or what I'm going to outline will make it more or 
less difficult to add the functionality later on). This is not to say 
we wouldn't provide validation of an override, but a subjective this 
is a problematic policy configuration doesn't need to be directly 
part of this conversation today (it can happen once we know what the 
model of policy looks like going forward).


1. The first thing that I'm hearing from the conversation (this is 
based upon Sean's proposal) is that we already trust the individual 
projects to know the enforcement profile for their resources. It seems 
like the project should be authoritative on what that enforcement 
should look like. Handing off the enforcement definition to Keystone 
is the wrong direction. I really like the concept of defining within 
Nova the default policy that nova works with (we do this already 
today, and I don't want to require the nova team to come to Keystone 
to make changes to the policy down the line). The Projects are trusted 
to know what the enforcement points are and trusted to distribute a 
basic profile of enforcement.


To the end that the enforcement definition is handled by the 
individual projects, making it something that is more than a blob of 
text also makes a lot of sense. A code-like model that is easier to 
understand for the developers that are implementing enforcement would 
be useful. The key pieces are that this code-like-construct must be 
able to be serialized out into the common format.
The policy file format is JSON, which is the standard for all of the 
APIs in OpenStack thus far.


Second, this code-construct is just the basic level of truth, the 
idea is that the dynamic policy will provide the overrides - and 
*everything* can be overridden.  The code-like construct will also aid 
in profiling/testing the base/defaults (and then the dynamic policy 
overrides) without having to standup the entire stack. We can enable 
base functionality testing / validation and then the more integrated 
testing with the full stack (in different environments). This will 
enable more accurate and better base policy development by the teams 
we already trust to build the enforcement for a given project (e.g. Nova).
We already have this.  It is the default policy.json that each project 
keeps up to date.


Please don't suggest putting annotations on the code and running a 
preprocessor.  That way leads to madness.  I see no reason to have a 
team write policy in Python and then serialize it to JSON.



--


The real current problem we have is this:  On a given API, we don't know 
where to look for the project ID and (almost all) policy needs to be 
enforced on the project scope.  What is required is for the base 
repository to have a document of how to set up the scoping for the call 
(token.project.id must match fetched_object.tenant_id), and we could 
mark that as dangerous to change.  What I would not want is to have 
the project hardcode the Role required.  Perhaps the API indicates one 
of two levels:  Admin vs Member, on an API, indicating the expected 
consumer of the API.  However, the current Policy file format represents 
this sufficiently.  We just need the Nova team to stay on top of this 
for Nova, and the other teams for their projects.
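
Today's policy.json can already express that kind of project scoping; an
illustrative fragment (not Nova's actual defaults):

    {
        "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
        "compute:get": "rule:admin_or_owner",
        "compute:delete": "rule:admin_or_owner"
    }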


What I would love to be able to get from Nova is "this API will end up 
calling these APIs in Glance, Cinder, and Neutron", so we can properly 
delegate.





2. We will need a way to handle the bi-directional notifications for 
policy changes. This would encompass when a project is restarted and 
has a new code-policy construct and how that gets to Keystone. We also 
need to ensure that changes to the overrides are pushed down to the 
projects. This is not an easy canned solution today, but I am sure we 
can solve it. Likely this is tied to Keystone Middleware or something 
similar (I believe code is already in the works to this end).


I appreciate that Nova is growing, but I would not expect huge amounts 
of new policy code with each update...at this point, new policy would 
be "deny" until there is a new rule to handle it and these new rules are 
uploaded.  Upgrading code is already part of an organization's deployment 
strategy, and trying to build in an additional two-way notification 
system is more than we should be doing here.


If Nova feels that each micro 

Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-04 Thread Eric Windisch


I think this is perfectly fine, as long as it's reasonably large and
 the algorithm is sufficiently intelligent. The UUID algorithm is good at
 this, for instance, although it fails at readability. Docker's is not
 terribly great and could be limiting if you were looking to run several
 thousand containers on a single machine. Something better than Docker's
 algorithm but more readable than UUID could be explored.

  Also, something to consider is if this should also mean a change to the
 UUIDs themselves. You could use UUID-5 to create a UUID from your tenant's
 UUID and your unique name. The tenant's UUID would be the namespace, with
 the bay's name being the name field. The benefit of this is that clients,
 by knowing their tenant ID could automatically determine their bay ID,
 while also guaranteeing uniqueness (or as unique as UUID gets, anyway).


  Cool idea!

 I'm clear with the solution, but still have some questions: So we need to
 set the bay/baymodel name in the format of UUID-name format? Then if we get
 the tenant ID, we can use magnum bay-list | grep tenant-id or some
 other filter logic to get all the bays belong to the tenant?  By default,
 the magnum bay-list/baymodel-list will only show the bay/baymodels for
 one specified tenant.


The name would be an arbitrary string, but you would also have a
unique-identifier which is a UUID. I'm proposing the UUID could be
generated using the UUID5 algorithm which is basically sha1(tenant_id +
unique_name)  converted into a GUID. The Python uuid library can do this
easily, out of the box.
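
For example (dummy tenant ID, purely illustrative):

    import uuid

    tenant_id = '12345678-1234-5678-1234-567812345678'   # dummy value
    bay_uuid = uuid.uuid5(uuid.UUID(tenant_id), 'swarmbay')
    print(bay_uuid)   # same tenant + same name always yields the same UUID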

Taking from the dev-quickstart, I've changed the instructions for creating
a container according to how this could work using uuid5:

$ magnum create-bay --name swarmbay --baymodel testbaymodel
$ BAY_UUID=$(python -c "import uuid; print uuid.uuid5(uuid.UUID('urn:uuid:${TENANT_ID}'), 'swarmbay')")
$ cat > ~/container.json << END
{
    "bay_uuid": "$BAY_UUID",
    "name": "test-container",
    "image_id": "cirros",
    "command": "ping -c 4 8.8.8.8"
}
END
$ magnum container-create < ~/container.json


The key difference in this example, of course, is that users would not need
to contact the server using bay-show in order to obtain the UUID of their
bay.

Regards,
Eric Windisch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Steven Dake (stdake)
Hongbin,

I hadn’t thought of that, even though it seems obvious ;)  We can absolutely do 
that.  I almost like this option better than #1, but I don’t want the UI folks 
to feel like second-class citizens.  This goes back to trusting the UI 
developers to not review things they know not about :)

Bradley,

How are other OpenStack projects handling the UI teams which clearly have a 
totally different specialty than the typical core team for a project with the 
Horizon big tent changes?

Regards
-steve


From: Hongbin Lu hongbin...@huawei.commailto:hongbin...@huawei.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, June 4, 2015 at 1:30 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Could we have a new group magnum-ui-core and include magnum-core as a subgroup, 
like the heat-coe-template-core group.

Thanks,
Hongbin

From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: June-04-15 1:58 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need 
a vote from the core team

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found atleast one volunteer Bradley Jones to tackle the 
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors.   If your interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Service Chain project IRC meeting minutes - 06/04/2015

2015-06-04 Thread Cathy Zhang
Thanks for joining the service chaining meeting today! Sorry for the time 
confusion. We will correct the weekly meeting time to 1700UTC (10am pacific 
time) Thursday #openstack-meeting-4 on the OpenStack meeting page.

Meeting Minutes:
http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.html
Meeting Minutes (text): 
http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.txt
Meeting Log:
http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.log.html

The next meeting is scheduled for June 11 (same place and time).

Thanks,
Cathy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-operators][openstack][chef] Pre-release of knife-openstack is out (1.2.0.rc1)

2015-06-04 Thread JJ Asghar
Hey everyone!

I have cut a new release of the knife-openstack[1] gem today. We have a couple 
of new features[2] which have been requested for a while.

Please give it a shot and report back any issues you might 
find[3].

gem install knife-openstack --pre

I’m hoping to give this a week or two to bake, then I’ll push it to master and 
a 1.2.0 release.

If you have any questions, thoughts, concerns please don’t hesitate to reach 
out!

[1]: https://rubygems.org/gems/knife-openstack 
https://rubygems.org/gems/knife-openstack
[2]: https://github.com/chef/knife-openstack/pull/165/files 
https://github.com/chef/knife-openstack/pull/165/files
[3]: https://github.com/chef/knife-openstack/issues 
https://github.com/chef/knife-openstack/issues

-JJ

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] 3 new guidelines entering freeze period

2015-06-04 Thread michael mccune
The API working group has three new guidelines that are entering the 1 
week freeze period. We would like to ask all PTLs and CPLs, and other 
interested parties, to take a look at these reviews. We will use lazy 
consensus at the end of the freeze to merge them if there are no objections.


Remember, these are guidelines, they are not meant to be taken as strict 
rules from on high that must be followed or else /ominous music. For 
further information about the API working group and its purpose, please 
see the group wiki[1].


The three guidelines up for review are:

* Guidance on 500 internal server error, 
https://review.openstack.org/#/c/179365


* Clarification on state-conflicting requests, 
https://review.openstack.org/#/c/180094


* Clarification on when to use 409, https://review.openstack.org/#/c/179386


thanks,
mike

[1]: https://wiki.openstack.org/wiki/API_Working_Group#Purpose

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Hard Code Freeze status update - June 4th 2PM PDT

2015-06-04 Thread Eugene Bogdanov

Hello everyone,

The Keystone patch passed staging tests, we are now building the ISO 
with this patch and other merged fixes. Test results for this ISO will 
be available tonight/early morning tomorrow PDT. If they are good, this 
will be our RC. So, our plan to declare HCF tomorrow morning/afternoon 
PDT (June 5th) looks realistic.


--
EugeneB



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Devananda van der Veen
On Jun 4, 2015 11:11 AM, Chris Friesen chris.frie...@windriver.com
wrote:

 On 06/04/2015 10:14 AM, Devananda van der Veen wrote:


 On Jun 4, 2015 8:57 AM, Monty Taylor mord...@inaugust.com


   So, seriously - let's grow up and start telling people that they do
not
   get to pick and choose user-visible feature sets. If they have an
unholy
   obsession with a particular backend technology that does not allow a
   public feature of the API to work, then they are deploying a broken
   cloud and they need to fix it.
  

 So I just had dinner last night with a very large user of OpenStack
(yes, they
 exist)  whose single biggest request is that we stop differentiating
in the
 API. To them, any difference in the usability / behavior / API between
OpenStack
 deployment X and Y is a serious enough problem that it will have two
effects:
 - vendor lock in
 - they stop using OpenStack
 And since avoiding single vendor lock in is important to them, well,
really it
 has only one result.

 Tl;Dr; Monty is right. We MUST NOT vary the API or behaviour
significantly or
 non-discoverably between clouds. Or we simply won't have users.


 If a vendor wants to differentiate themselves, what about having two
sets of API endpoints?  One that is full vanilla openstack with
bog-standard behaviour, and one that has vendor-specific stuff in it?

 That way the end-users that want interop can just use the standard API
and get common behaviour across clouds, while the end-users that want the
special sauce and are willing to lock in to a vendor to get it can use
the vendor-specific API.


You've just described what ironic has already done with the
/vendor_passthru/ endpoint.

However, the issue, more broadly, is just discovery of differences in the
API which make one cloud behave differently than another. Sometimes those
aren't related to vendor-specific features at all. Eg, changes which are
the result of config settings, or where a fix or feature gets back ported
(because sometimes someone thinks that's easier than a full upgrade). These
things exist today, but create a terrible experience for users who want to
move a workload between OpenStack clouds, and find that the APIs don't
behave quite the same, even for basic functionality.

-D

 Chris


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Morgan Fainberg
Hi Everyone!

I've been reading through this thread and have had some conversations along
the side and wanted to jump in to distill out what I think are the key
points we are trying to address here. I'm going to outline about 4 items
that seem to make sense to me regarding the evolution of policy. I also
want to say that the notification that something has changed from the
defaults in a way that may cause odd behavior to the side (the warning Sean
was outlining); we can keep the desire to have those types of warnings for
operators down the line (nothing that is being proposed here or what I'm
going to outline will make it more or less difficult to add the
functionality later on). This is not to say we wouldn't provide validation
of an override, but a subjective this is a problematic policy
configuration doesn't need to be directly part of this conversation today
(it can happen once we know what the model of policy looks like going
forward).

1. The first thing that I'm hearing from the conversation (this is based
upon Sean's proposal) is that we already trust the individual projects to
know the enforcement profile for their resources. It seems like the project
should be authoritative on what that enforcement should look like. Handing
off the enforcement definition to Keystone is the wrong direction. I really
like the concept of defining within Nova the default policy that nova works
with (we do this already today, and I don't want to require the nova team
to come to Keystone to make changes to the policy down the line). The
Projects are trusted to know what the enforcement points are and trusted to
distribute a basic profile of enforcement.

To the end that the enforcement definition is handled by the individual
projects, making it something that is more than a blob of text also makes
a lot of sense. A code-like model that is easier to understand for the
developers that are implementing enforcement would be useful. The key
pieces are that this code-like-construct must be able to be serialized out
into the common format. Second, this code-construct is just the basic
level of truth, the idea is that the dynamic policy will provide the
overrides - and *everything* can be overridden.  The code-like construct
will also aid in profiling/testing the base/defaults (and then the dynamic
policy overrides) without having to standup the entire stack. We can enable
base functionality testing / validation and then the more integrated
testing with the full stack (in different environments). This will enable
more accurate and better base policy development by the teams we already
trust to build the enforcement for a given project (e.g. Nova).
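
(Purely as a sketch of the idea -- hypothetical names, not a proposed
implementation -- a project could keep its defaults in code and serialize them
out to the common JSON format:)

    import json

    # Hypothetical in-code defaults, expressed in oslo.policy rule syntax.
    DEFAULT_RULES = {
        "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
        "compute:create": "rule:admin_or_owner",
        "compute:delete": "rule:admin_or_owner",
    }

    def dump_default_policy(path):
        # Dynamic-policy overrides would then be layered on top of this file.
        with open(path, 'w') as f:
            json.dump(DEFAULT_RULES, f, indent=4, sort_keys=True)
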
--

2. We will need a way to handle the bi-directional notifications for policy
changes. This would encompass when a project is restarted and has a new
code-policy construct and how that gets to Keystone. We also need to ensure
that changes to the overrides are pushed down to the projects. This is not
an easy canned solution today, but I am sure we can solve it. Likely this
is tied to Keystone Middleware or something similar (I believe code is
already in the works to this end).
--

3. The HA-Proxy mode of deployment with projects that can handle
no-downtime upgrades mean that we need to add in versioning into the policy
structures. The policy files for a Kilo vintage of Nova may (likely) will
be incompatible with Liberty nova. This means we cannot assume that
policy can be centralized easily even for a specific grouping of
api-services running as a single endpoint. This becomes an even more
important mechanism as we move towards more and more services with
microversioned APIs. It means it is totally reasonable to upgrade 1 or 2
nova APIs behind an HA Proxy since the new APIs will handle the old
microversion of the API.

This leads to needing policy to likewise be versioned. This also means that
only the service can be authoritative with the base-policy construct. This
means whatever tool we use for handling the overrides on the Keystone side
will need to be aware of policy versions as well. Having Keystone side
being exclusively authoratative for the entire policy makes development,
testing, and understanding of policy harder. This is another case of the
project itself should be in control of the base policy definition.
--

4. As a note that came out of the conversation I had with Sean, we should
look at no longer making the policy definition for an API keyed on the
internal method name of a project. While instance_create is relatively
descriptive, there are many other API calls for which you cannot really know
what changing the policy will do without trying it. Sean and I were
discussing moving towards supplying a representation of the URI path
instead of image_create. This is something that consumers and deployers
of OpenStack will be more familiar with. It also eliminates some of the
mapping needed to know what the URI of image create is when utilized in
the Horizon context (asking 

Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Bradley Jones (bradjone)
I will get a spec out for review by end of day tomorrow.

Thanks,
Brad Jones

On 4 Jun 2015, at 22:30, Adrian Otto 
adrian.o...@rackspace.commailto:adrian.o...@rackspace.com wrote:

Team,

I have published a top level blueprint for a magnum-horizon-plugin:

https://blueprints.launchpad.net/magnum/+spec/magnum-horizon-plugin

My suggestion is that any contributor interested in contributing to this 
feature should subscribe to that blueprint, and record their intent to 
contribute in the Whiteboard of the BP. Furthermore, I suggest that any 
contributors who are a good fit for core reviewer duties for this effort 
subscribe to the blueprint and mark themselves as “Participation Essential” so 
I can get a clear picture of how to deal with grouping the related core 
reviewer team (or adding them to the current core group).

I think that this effort would benefit from a spec submitted as a review using 
the following template:

http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/liberty-template.rst

Adapt it for magnum (I have not contributed a spec template of our own yet. 
TODO.)

Contribute it here:

http://git.openstack.org/cgit/openstack/magnum/tree/specs

Thanks,

Adrian

On Jun 4, 2015, at 12:58 PM, Steven Dake (stdake) 
std...@cisco.commailto:std...@cisco.com wrote:

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found atleast one volunteer Bradley Jones to tackle the 
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors.   If your interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-04 Thread Dolph Mathews
Problem! In writing a spec for this (
https://review.openstack.org/#/c/188564/ ), I remembered that groups are
domain-specific entities, which complicates the problem of providing
X-Group-Names via middleware.

The problem is that we can't simply expose X-Group-Names to underlying
services without either A) making a well-documented assumption about the
ONE owning domain scope of ALL included groups, B) passing significantly
more data to underlying services than just a list of names (a domain scope
for every group), C) passing only globally-unique group IDs (services would
then have to retrieve additional details about each from from keystone if
they so cared).

Option A) More specifically, keystone could opt to enumerate the groups
that belong to the same domain as the user. In this case, it'd probably
make more sense from an API perspective if the groups enumeration were
part of the user resources in the token response body (the user object
already has a containing domain ID). That means that IF a user were to be
assigned a group membership in another domain (assuming we didn't move to
disallowing that behavior at some point), then it would have to be excluded
from this list. If that were true, then I'd also follow that X-Group-Names
become X-User-Group-Names, so that it might be more clear that they belong
to the X-User-Domain-*.

Option B) This is probably the most complex solution, but also the most
explicit. I have no idea how this interface would look in terms of headers
using current conventions. If we're going to break conventions, then I'd
want to pass a id+domain_id+name for each group reference. So, rather than
including a list of names AND a list of IDs, we'd have some terribly
encoded list of group objects (I'm not sure what the HTTP convention is on
this sort of use case, and hoping someone can illustrate a better solution
given the representation below):

  X-Groups:
id%3D123%2Cdomain_id%3D456%2Cname%3Dabc,id%3D789%2Cdomain_id%3D357%2Cname%3Ddef
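
(For illustration only -- not an existing keystonemiddleware interface -- such
a header could be produced and parsed roughly like this, assuming group names
contain no commas or equals signs:)

    from six.moves.urllib import parse

    def encode_groups(groups):
        items = []
        for g in groups:
            raw = 'id=%s,domain_id=%s,name=%s' % (g['id'], g['domain_id'], g['name'])
            items.append(parse.quote(raw, safe=''))
        return ','.join(items)

    def decode_groups(header_value):
        groups = []
        for item in header_value.split(','):
            fields = parse.unquote(item).split(',')
            groups.append(dict(f.split('=', 1) for f in fields))
        return groups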

Option C) Federated tokens would actually require solution (C) today
because they only include group IDs, not names. But the group enumeration
in federated tokens was also only intended to be consumed by keystone, so
that's not really an issue for that one use case. But option (C) would mean
there are no X-Group-Names passed to services, just X-Group-Ids. I'm
guessing this won't provide the user experience that Barbican is looking
for?


I'm leaning towards solution (A), but curious if that'll work for Barbican
and/or if anyone has an idea that I'm overlooking.


On Thu, Jun 4, 2015 at 8:18 AM, Dolph Mathews dolph.math...@gmail.com
wrote:

 To clarify: we already have to include the groups produced as a result of
 federation mapping **in the payload** of Fernet tokens so that scoped
 tokens can be created later:


 https://github.com/openstack/keystone/blob/a637ebcbc4a92687d3e80a50cbe88df3b13c79e6/keystone/token/providers/fernet/token_formatters.py#L523

 These are OpenStack group IDs, so it's up to the deployer to keep those
 under control to keep Fernet token sizes down. It's the only place in the
 current Fernet implementation that's (somewhat alarmingly) unbounded in the
 real world.

 But we do **not** have a use case to add groups to *all* Fernet payloads:
 only to token creation  validation responses.


 On Thu, Jun 4, 2015 at 2:36 AM, Morgan Fainberg morgan.fainb...@gmail.com
  wrote:

 For Fernet, the groups would only be populated on validate as Dolph
 outlined. They would not be added to the core payload. We do not want to
 expand the payload in this manner.

 --Morgan

 Sent via mobile

 On Jun 3, 2015, at 21:51, Lance Bragstad lbrags...@gmail.com wrote:

 I feel if we allowed group ids to be an attribute of the Fernet's core
 payload, we continue to open up the possibility for tokens to be greater
 than the initial acceptable size limit for a Fernet token (which I
 believe was 255 bytes?). With this, I think we need to provide guidance on
 the number of group ids allowed within the token before that size limit is
 compromised.

 We've landed patches recently that allow for id strings to be included in
 the Fernet payload [0], regardless of being uuid format (which can be
 converted to bytes before packing to save space, this is harder for us to
 do with non-uuid format id strings). This can also cause the Fernet token
 size to grow. If we plan to include more information in the Fernet token
 payload I think we should determine if the original acceptable size limit
 still applies and regardless of what that size limit is provide some sort
 of best practices for helping deployments keep their token size as small
 as possible.
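
(A quick illustration of the size difference from packing uuid-format ids --
not keystone code:)

    import uuid

    user_id = uuid.uuid4().hex            # 32-character hex string
    packed = uuid.UUID(user_id).bytes     # 16 raw bytes, half the size
    assert uuid.UUID(bytes=packed).hex == user_id   # round-trips losslessly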


 Keeping the tokens user (and developer) friendly was a big plus in the
 design of Fernet, and providing resource for deployments to maintain that
 would be helpful.


 [0]
 https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:bug/1459382,n,z

 On Wed, Jun 3, 2015 at 10:19 

Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-04 Thread Itsuro ODA
Hi,

 After trying to reproduce this, I'm suspecting that the issue is actually
 on the server side from failing to drain the agent report state queue in
 time.

I have seen this before.
I thought the scenario at that time was as follows:
* a lot of create/update resource API calls are issued
* the rpc_conn_pool_size pool is exhausted by sending notifications, which
  blocks further sending on the RPC side
* the rpc_thread_pool_size pool is exhausted by threads waiting on the
  rpc_conn_pool_size pool to send RPC replies
* receiving state_report is blocked because the rpc_thread_pool_size pool
  is exhausted
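
(For reference, the options involved -- values shown are the approximate
defaults of that era, purely illustrative:)

    # neutron.conf -- oslo.messaging pools
    [DEFAULT]
    # connections available for sending RPC calls and notifications
    rpc_conn_pool_size = 30
    # greenthreads available to process incoming RPC, including report_state
    rpc_thread_pool_size = 64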

Thanks
Itsuro Oda

On Thu, 4 Jun 2015 14:20:33 -0700
Kevin Benton blak...@gmail.com wrote:

 After trying to reproduce this, I'm suspecting that the issue is actually
 on the server side from failing to drain the agent report state queue in
 time.
 
 I set the report_interval to 1 second on the agent and added a logging
 statement and I see a report every 1 second even when sync_routers is
 taking a really long time.
 
 On Thu, Jun 4, 2015 at 11:52 AM, Carl Baldwin c...@ecbaldwin.net wrote:
 
  Ann,
 
  Thanks for bringing this up.  It has been on the shelf for a while now.
 
  Carl
 
  On Thu, Jun 4, 2015 at 8:54 AM, Salvatore Orlando sorla...@nicira.com
  wrote:
   One reason for not sending the heartbeat from a separate greenthread
  could
   be that the agent is already doing it [1].
   The current proposed patch addresses the issue blindly - that is to say
   before declaring an agent dead let's wait for some more time because it
   could be stuck doing stuff. In that case I would probably make the
   multiplier (currently 2x) configurable.
  
   The reason for which state report does not occur is probably that both it
   and the resync procedure are periodic tasks. If I got it right they're
  both
   executed as eventlet greenthreads but one at a time. Perhaps then adding
  an
   initial delay to the full sync task might ensure the first thing an agent
   does when it comes up is sending a heartbeat to the server?
  
   On the other hand, while doing the initial full resync, is the  agent
  able
   to process updates? If not perhaps it makes sense to have it down until
  it
   finishes synchronisation.
 
  Yes, it can!  The agent prioritizes updates from RPC over full resync
  activities.
 
  I wonder if the agent should check how long it has been since its last
  state report each time it finishes processing an update for a router.
  It normally doesn't take very long (relatively) to process an update
  to a single router.
 
  I still would like to know why the thread to report state is being
  starved.  Anyone have any insight on this?  I thought that with all
  the system calls, the greenthreads would yield often.  There must be
  something I don't understand about it.
 
  Carl
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Kevin Benton

-- 
Itsuro ODA o...@valinux.co.jp


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-04 Thread Carl Baldwin
On Thu, Jun 4, 2015 at 3:20 PM, Kevin Benton blak...@gmail.com wrote:
 After trying to reproduce this, I'm suspecting that the issue is actually on
 the server side from failing to drain the agent report state queue in time.

 I set the report_interval to 1 second on the agent and added a logging
 statement and I see a report every 1 second even when sync_routers is taking
 a really long time.

Very good insight.  That makes a lot more sense to me.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-04 Thread Clint Adams
On Wed, Jun 03, 2015 at 04:30:17PM -0400, Sean Dague wrote:
 The closer we can get logic about what a service should look like on
 disk back into that service itself, the less work duplicated by any of
 the installers, and the more common OpenStack envs would be. The fact
 that every installer / package needs to copy in a bunch of etc files
 (because the python packages don't do it) has always seemed rather odd
 to me.

I agree with this, and given the disparate and seemingly contradictory
goals driving these discussions, I think it will be exceedingly
difficult to make everyone happy. So here's my suggestion:

1. Maintain all important data for packaging as first-class members of
   the respective repositories. Initscripts, systemd service files,
   licensing (SPDX?), and so on should be in the master branch of each
   project. Just enough glue should be present such that functional
   packaging can be programmatically generated from HEAD with debdry or
   similar tooling, and this is how test jobs should operate (f.ex.: run
   debdry, mangle version number to something unique, build package in
   chroot or equivalent, store output for use in other testing).

2. Create a single repository, with one subdirectory per source
   package, in which overrides or patches to third-party packaging can
   be committed and used to trigger builds.

This way OpenStack could

-  produce and consume its own package artifacts for testing changes of
   varying complexity
-  get early warning of changes which will break packaging, insanity in
   dependency graphs, and so on
-  be a central point of collaboration without introducing a bunch of
   repo duplication or bypass of code review

Distributors could

-  clone the upstream repos and branch to add any special tweaks, then
   potentially run the same automation to get source/binary packages
-  collaborate upstream to fix things of common utility


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Rebranded Volume Drivers

2015-06-04 Thread Mike Perez
Sounds like the community would like CI's regardless, and I agree.

Just because the driver code works for one backend solution doesn't
mean it's going to work with another.

Let's continue with code reviews on these patches only if they have a
CI reporting, unless someone has a compelling reason we should not let
any rebranded drivers in.

--
Mike Perez


On Wed, Jun 3, 2015 at 10:32 AM, Mike Perez thin...@gmail.com wrote:
 There are a couple of cases [1][2] I'm seeing where new Cinder volume
 drivers for Liberty are rebranding other volume drivers. This involves
 inheriting off another volume driver's class(es) and providing some
 config options to set the backend name, etc.

 Two problems:

 1) There is a thought of no CI [3] is needed, since you're using
 another vendor's driver code which does have a CI.

 2) IMO another way of satisfying a check mark of being OpenStack
 supported and disappearing from the community.

 What gain does OpenStack get from these kind of drivers?

 Discuss.

 [1] - https://review.openstack.org/#/c/187853/
 [2] - https://review.openstack.org/#/c/187707/4
 [3] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers

 --
 Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Rebranded Volume Drivers

2015-06-04 Thread Alex Meade
Agreed, I'd also like to mention that rebranded arrays may differ slightly
in functionality as well so the CIs would need to run against a physical
rebranded device. These differences also justify the need for letting
rebranded drivers in.

-Alex

On Thu, Jun 4, 2015 at 4:41 PM, Mike Perez thin...@gmail.com wrote:

 Sounds like the community would like CI's regardless, and I agree.

 Just because the driver code works for one backend solution, doesn't
 mean it's going to work with some other.

 Lets continue with code reviews with these patches only if they have a
 CI reporting, unless someone has a compelling reason we should not let
 any rebranded drivers in.

 --
 Mike Perez


 On Wed, Jun 3, 2015 at 10:32 AM, Mike Perez thin...@gmail.com wrote:
  There are a couple of cases [1][2] I'm seeing where new Cinder volume
  drivers for Liberty are rebranding other volume drivers. This involves
  inheriting off another volume driver's class(es) and providing some
  config options to set the backend name, etc.
 
  Two problems:
 
  1) There is a thought of no CI [3] is needed, since you're using
  another vendor's driver code which does have a CI.
 
  2) IMO another way of satisfying a check mark of being OpenStack
  supported and disappearing from the community.
 
  What gain does OpenStack get from these kind of drivers?
 
  Discuss.
 
  [1] - https://review.openstack.org/#/c/187853/
  [2] - https://review.openstack.org/#/c/187707/4
  [3] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
 
  --
  Mike Perez

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Adrian Otto
Jason,

I will definitely weigh all feedback on this. I’m interested in guidance from 
anyone taking an interest in this subject.

Thanks,

Adrian

 On Jun 4, 2015, at 1:55 PM, Jason Rist jr...@redhat.com wrote:
 
 On 06/04/2015 11:58 AM, Steven Dake (stdake) wrote:
 Hey folks,
 
 I think it is critical for self-service needs that we have a Horizon 
 dashboard to represent Magnum.  I know the entire Magnum team has no 
 experience in UI development, but I have found atleast one volunteer Bradley 
 Jones to tackle the work.
 
 I am looking for more volunteers to tackle this high impact effort to bring 
 Containers to OpenStack either in the existing Magnum core team or as new 
 contributors.   If your interested, please chime in on this thread.
 
 As far as “how to get patches approved”, there are two models we can go with.
 
 Option #1:
 We add these UI folks to the magnum-core team and trust them not to +2/+A 
 Magnum infrastructure code.  This also preserves us as one team with one 
 mission.
 
 Option #2:
 We make a new core team magnum-ui-core.  This presents special problems if 
 the UI contributor team isn’t large enough to get reviews in.  I suspect 
 Option #2 will be difficult to execute.
 
 Cores, please vote on Option #1, or Option #2, and Adrian can make a 
 decision based upon the results.
 
 Regards
 -steve
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 I am interested in helping as well.  In my experience, #1 works best,
 but I'm not a core, so I'm not sure my wisdom is counted here.
 
 -J
 
 -- 
 Jason E. Rist
 Senior Software Engineer
 OpenStack Infrastructure Integration
 Red Hat, Inc.
 openuc: +1.972.707.6408
 mobile: +1.720.256.3933
 Freenode: jrist
 github/identi.ca: knowncitizen
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-04 Thread Kevin Benton
After trying to reproduce this, I'm suspecting that the issue is actually
on the server side from failing to drain the agent report state queue in
time.

I set the report_interval to 1 second on the agent and added a logging
statement and I see a report every 1 second even when sync_routers is
taking a really long time.
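
(For anyone reproducing this, the knobs involved -- report_interval = 1 is the
value used above; agent_down_time is shown at roughly its default:)

    # l3_agent.ini (agent side)
    [AGENT]
    report_interval = 1

    # neutron.conf (server side); should stay a comfortable multiple of
    # report_interval so a busy agent is not marked dead prematurely
    [DEFAULT]
    agent_down_time = 75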

On Thu, Jun 4, 2015 at 11:52 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 Ann,

 Thanks for bringing this up.  It has been on the shelf for a while now.

 Carl

 On Thu, Jun 4, 2015 at 8:54 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  One reason for not sending the heartbeat from a separate greenthread
 could
  be that the agent is already doing it [1].
  The current proposed patch addresses the issue blindly - that is to say
  before declaring an agent dead let's wait for some more time because it
  could be stuck doing stuff. In that case I would probably make the
  multiplier (currently 2x) configurable.
 
  The reason for which state report does not occur is probably that both it
  and the resync procedure are periodic tasks. If I got it right they're
 both
  executed as eventlet greenthreads but one at a time. Perhaps then adding
 an
  initial delay to the full sync task might ensure the first thing an agent
  does when it comes up is sending a heartbeat to the server?
 
  On the other hand, while doing the initial full resync, is the  agent
 able
  to process updates? If not perhaps it makes sense to have it down until
 it
  finishes synchronisation.

 Yes, it can!  The agent prioritizes updates from RPC over full resync
 activities.

 I wonder if the agent should check how long it has been since its last
 state report each time it finishes processing an update for a router.
 It normally doesn't take very long (relatively) to process an update
 to a single router.

 I still would like to know why the thread to report state is being
 starved.  Anyone have any insight on this?  I thought that with all
 the system calls, the greenthreads would yield often.  There must be
 something I don't understand about it.

 Carl

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Standard way to indicate a partial blueprint implementation?

2015-06-04 Thread Zane Bitter
Ever since we established[1] a format for including metadata about bugs 
in Git commit messages that included a 'Partial-Bug' tag, people have 
been looking for a way to do the equivalent for partial blueprint 
implementations. A non-exhaustive search of a small number of projects 
reveals at least the following formats in use:


partial blueprint: x
Implements: partial blueprint x
Implements: partial-blueprint x
Partial-Blueprint: x
part of blueprint x
partial blueprint x
Implements: blueprint x (partial)
Partially implements: blueprint x
Partially Implements: blueprint x
Partially-Implements: blueprint x
Partial implements blueprint x
Partially-Implements-Blueprint: x
Part of blueprint x
Partial-implements: blueprint x
Partial-Implements: blueprint x
partially implement: blueprint x

No guidance is available on the wiki page.[2] Clearly the regex doesn't 
care so long as it sees the word blueprint followed by the blueprint 
name. I have personally been using:


  Implements: partial-blueprint x

but I don't actually care much. I would also be fine with:

  Partially-Implements: blueprint x

I do think it should have a colon so that it fits the format of the rest 
of the sign-off stanza, I don't think it should have any spaces before 
the colon for the same reason, and ideally it would have 'Implements' in 
there somewhere for consistency.
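
(As for the matching, presumably the hook does something roughly like the
following -- illustrative only, not the actual infra hook code:)

    import re

    # Loose enough to match every variant in the list above.
    BLUEPRINT_RE = re.compile(r'\bblueprint[:\s]+([\w.-]+)', re.IGNORECASE)

    msg = "Partially-Implements: blueprint some-blueprint-name"
    print(BLUEPRINT_RE.search(msg).group(1))   # -> 'some-blueprint-name'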


Sahara folks have documented a standard,[3] but it fails on some of 
those criteria and in any event they haven't actually been following it.


Can we agree on and document a standard way of doing this?

Yes, someone -1'd my patch for this. How did you guess?

cheers,
Zane.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-August/012945.html

[2] https://wiki.openstack.org/wiki/GitCommitMessages
[3] https://wiki.openstack.org/wiki/Sahara/GitCommits

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Alex Chan (alexc2)
Thanks for the details Brad!  I would also be interested in helping out with 
the work.

From: Bradley Jones (bradjone) bradj...@cisco.commailto:bradj...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, June 4, 2015 at 12:59 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

It’s difficult to put a quantifiable figure on how big a work item this is but 
I can try to give an overview of the steps required.

The first step will be to extend the Horizon API to include CRUD operations 
that we need to perform with Magnum. Assuming that there are no issues here and 
API changes/additions are not required at this point, we can begin to flesh out 
how we would like the UI to look. We will aim to reduce the amount of 
Magnum-specific UI code that will need to be maintained by reusing components 
from Horizon. This will also speed up the development significantly.
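
As a rough sketch of the mechanism (module paths below are placeholders, not 
the actual magnum-ui layout), Horizon's pluggable settings let a package 
register a panel by dropping a file into openstack_dashboard/enabled/:

    # openstack_dashboard/enabled/_50_magnum.py  (hypothetical)

    # The slug of the panel to be added.
    PANEL = 'magnum'
    # The slug of the dashboard the PANEL is associated with.
    PANEL_DASHBOARD = 'project'
    # The slug of the panel group the PANEL is associated with.
    PANEL_GROUP = 'container_infra'
    # Python panel class of the PANEL to be added.
    ADD_PANEL = 'magnum_ui.content.bays.panel.Bays'
    # A list of applications to be added to INSTALLED_APPS.
    ADD_INSTALLED_APPS = ['magnum_ui']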

In version 1 of Magnum UI a user should be able to perform all normal 
interactions with Magnum through the UI with no need for interaction with the 
python client.

Future versions of Magnum UI would include admin specific views and any 
additional Magnum specific UI components we may want to add (maybe some 
visualisations).

That is a brief overview of my vision for this effort and I believe that 
version 1 should comfortably be achievable this release cycle.

Thanks,
Brad Jones

On 4 Jun 2015, at 19:49, Brad Topol 
bto...@us.ibm.commailto:bto...@us.ibm.com wrote:

How big a work item is this?


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.commailto:bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:Thai Q Tran/Silicon Valley/IBM@IBMUS
To:OpenStack Development Mailing List \(not for usage questions\) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date:06/04/2015 02:20 PM
Subject:Re: [openstack-dev] [magnum][horizon] Making a dashboard for 
Magnum - need a vote from the core team




I am interested but not sure how much time I have this release cycle. I can 
take on a more hands-off approach and help review to make sure that magnum-ui 
is aligned with future Horizon directions.

-Steven Dake (stdake) std...@cisco.commailto:std...@cisco.com wrote: 
-
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
From: Steven Dake (stdake) std...@cisco.commailto:std...@cisco.com
Date: 06/04/2015 11:03AM
Subject: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need 
a vote from the core team

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found atleast one volunteer Bradley Jones to tackle the 
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors.   If your interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Adrian Otto
Steve,

Thanks for raising this. I am definitely looking forward to a UI effort for 
Magnum, and will be happy to decide based on community input about how best to 
organize this. I’d also like direct input from those contributors who plan to 
work on this to take those individual preferences into account. If you are not 
comfortable voicing your concern on the ML, then you may contact me 
individually, and I will post a summary based on any consensus we form as a 
team.

Adrian

On Jun 4, 2015, at 3:35 PM, Steven Dake (stdake) 
std...@cisco.commailto:std...@cisco.com wrote:

Hongbin,

I hadn’t thought of that, even though it seems obvious ;)  We can absolutely do 
that.  I almost like this option better than #1 but I don’t want the ui folks
to feel like second class citizens.  This goes back to the trust the ui 
developers to not review things they know not about :)

Bradley,

How are other OpenStack projects handling the UI teams which clearly have a 
totally different specialty than the typical core team for a project with the 
Horizon big tent changes?

Regards
-steve


From: Hongbin Lu hongbin...@huawei.commailto:hongbin...@huawei.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, June 4, 2015 at 1:30 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Could we have a new group magnum-ui-core and include magnum-core as a subgroup, 
like the heat-coe-template-core group?

Thanks,
Hongbin

From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: June-04-15 1:58 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need 
a vote from the core team

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer Bradley Jones to tackle the
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors.   If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-04 Thread Thomas Goirand
Hi Clint,

Thanks for your contribution to this thread.

On 06/04/2015 10:35 PM, Clint Adams wrote:
 On Wed, Jun 03, 2015 at 04:30:17PM -0400, Sean Dague wrote:
 The closer we can get logic about what a service should look like on
 disk back into that service itself, the less work duplicated by any of
 the installers, and the more common OpenStack envs would be. The fact
 that every installer / package needs to copy in a bunch of etc files
 (because the python packages don't do it) has always seemed rather odd
 to me.
 
 I agree with this, and given the disparate and seemingly contradictory
 goals driving these discussions, I think it will be exceedingly
 difficult to make everyone happy.

I don't think anyone involved in the packaging of OpenStack has
expressed disparity or contradiction. Quite the opposite: it is my
strong opinion that all parties involved are on the same page.

 So here's my suggestion:
 
 1. Maintain all important data for packaging as first-class members of
the respective repositories. Initscripts, systemd service files,
licensing (SPDX?), and so on should be in the master branch of each
project.

Are you here saying we should move startup scripts upstream, and not keep them
in the packaging repos? If so, that's a bad idea. Let me explain.

The init scripts used to be hard to maintain because they were many, but
since Debian & Ubuntu are using automatic generation out of a tiny
template (with sysv-rc, systemd and upstart all supported), this problem is
now solved.

If individual projects are getting into the business of publishing their
own startup scripts, I doubt that we would even use them, because having
a central place to patch all of the startup scripts at once (ie:
openstack-pkg-tools) is much better than having to maintain each and
every startup script by hand.

As for the licensing, I agree here. I have expressed my opinion about it
multiple times: the project as a whole needs to make some progress, as
it's nearly impossible for downstream distributions to second-guess who
the copyright holders are (please pay attention: copyright holders and
licensing are two separate things...).

Though SPDX?!? Do you know Jay Pipes' famous quote, "Get your dirty
XML out of my JSON"? :) And there's also the fact that Debian uses a
different parseable format. Not sure what the RPM folks use, but I
suppose that's embedded in a .spec file. Also, all of OpenStack mostly
uses the Apache-2.0 license; the licensing issues are with
(build-)dependencies, and that we can't solve.

Just enough glue should be present such that functional
packaging can be programmatically generated from HEAD with debdry
or similar tooling, and this is how test jobs should operate (f.ex.: run
debdry, mangle version number to something unique, build package in
chroot or equivalent, store output for use in other testing).

I already use pkgos-debpypi (from openstack-pkg-tools) to automate a big
chunk of the Python module packaging. Though the automated thing can
only drive you so far. It won't fix missing dependencies in
requirements.txt, wrong lower bounds, SSLv3-removal-related issues, and
all the kinds of issues which we constantly need to address to make
sure all unit tests pass. The thing is, unit & functional tests
in OpenStack are designed for Devstack. I suppose you already saw the
"It worked on Devstack" XKCD t-shirt... Well, I'm sure all package
maintainers of OpenStack have this in mind every day! :)

As you are a DD, you know it too: getting a correct debian/copyright
also represents some work, which cannot be automated, or the FTP masters
will really hate you. :P

Plus there's also the fact that sometimes, distributions don't use the
matching versions. For example, Debian Jessie was released with Django
1.7 and OpenStack Icehouse, and I had to work on patching Horizon to
make it work with it (at the time, Horizon didn't work with something
higher than 1.6). The same thing happened with SQLAlchemy.

This has slowed down a little bit over the years, but we also used to
spend most of our time simply packaging new dependencies. We already
have a big amount of redundancy (alembic vs sqlalchemy-migrate, WSGI
frameworks, nose vs testr, pymysql vs mysqlclient vs mysql-python, you
name it...). And each new project, with its specificities, brings new
dependencies. I used to say that 80% of the packaging work is spent
there: packaging new stuff. These days, it has a bit shifted to
packaging oslo and client libs, which is a good thing (as they are
respecting a standard, so we have fewer surprises). Though what actually
represents an OpenStack release has grown bigger. After Jessie was
released, uploading all of Kilo to Sid took me about 3 or 4 days, and
maybe about the same amount of time to upload it to the official Jessie
backports (all done in dependency order, with as little dependency breakage
as possible...).

I really hope that the effort on having a gate on the lower bounds of
our 

Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Bradley Jones (bradjone)
From a quick look around there doesn’t seem to be a consensus on how to deal
with UI teams: some projects seem to use just the main project's core team
(group-based-policy UI), others use a mix of the project core team and the
Horizon core team (monasca-ui), and in the case of tuskar-ui they use just the
Horizon core team. I can’t find a case of another project having a dedicated
ui-core team, but as I say it was a quick look through the project-config acls
so I may have missed some.

My two cents: it is important that the people working on this effort
have some level of control over the code base in order for things to progress
quickly, and to that end approach #1, approach #2, or Hongbin’s suggestion all
achieve that goal. So any of the options work for me from that point of
view, and the decision really has to come down to what will work best in getting
current Magnum folks participating in all aspects of the UI design,
implementation and particularly reviewing.

It is also important to have Horizon cores participate in reviews to ensure
that we don’t diverge too much from upstream changes in Horizon; it sounds like
we already have that commitment for reviews from David and Thai, so thanks guys
:)

Brad Jones

On 4 Jun 2015, at 21:35, Steven Dake (stdake) 
std...@cisco.commailto:std...@cisco.com wrote:

Hongbin,

I hadn’t thought of that, even though it seems obvious ;)  We can absolutely do 
that.  I almost like this option better than #1 but I don’t want the ui folks
to feel like second class citizens.  This goes back to the trust the ui 
developers to not review things they know not about :)

Bradley,

How are other OpenStack projects handling the UI teams which clearly have a 
totally different specialty than the typical core team for a project with the 
Horizon big tent changes?

Regards
-steve


From: Hongbin Lu hongbin...@huawei.commailto:hongbin...@huawei.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, June 4, 2015 at 1:30 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Could we have a new group magnum-ui-core and include magnum-core as a subgroup, 
like the heat-coe-template-core group?

Thanks,
Hongbin

From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: June-04-15 1:58 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need 
a vote from the core team

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer Bradley Jones to tackle the
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors.   If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-04 Thread Fox, Kevin M
In Juno I tried adding a user in Domain A to a group in Domain B. That currently
is not supported. Would be very handy though.

We're getting a ways from the original part of the thread, so I may have lost
some context, but I think the original question was whether Barbican can add
group names to its resource ACLs.

Since two administrative domains can issue the same group name, it's not safe, I
believe.

Simply ensuring that the group name is associated with a user, and that the
domain of the user matches the domain of the group, wouldn't work: someone with
control of their own domain can just make a
user, give them a group with the name they want, and come take your
credentials.

What may be safe is for the Barbican ACL to contain the group_id, if those are
unique across all domains, or to take a domain_id & group_name pair for the ACL.
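
As a rough sketch of that last point (this is not Barbican's actual ACL model;
all names here are hypothetical), keying the ACL on the pair rather than on the
bare group name would look something like:

    # Purely illustrative: hypothetical ACL entries keyed on the
    # (domain_id, group_name) pair, so a group called "secret-readers"
    # created in some other domain cannot match by accident.
    acl_group_entries = {
        ("d3adb33f", "secret-readers"),
        ("d3adb33f", "auditors"),
    }

    def group_grants_access(user_domain_id, user_group_names):
        # Grant only if a group matches both by name and by owning domain.
        return any((user_domain_id, name) in acl_group_entries
                   for name in user_group_names)

    # A user from a different domain presenting the same group name is denied:
    assert group_grants_access("d3adb33f", ["auditors"])
    assert not group_grants_access("00000000", ["secret-readers"])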

Thanks,
Kevin


From: Dolph Mathews [dolph.math...@gmail.com]
Sent: Thursday, June 04, 2015 1:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
X-Group- in token validation

Problem! In writing a spec for this ( https://review.openstack.org/#/c/188564/ 
), I remembered that groups are domain-specific entities, which complicates the 
problem of providing X-Group-Names via middleware.

The problem is that we can't simply expose X-Group-Names to underlying services
without either A) making a well-documented assumption about the ONE owning
domain scope of ALL included groups, B) passing significantly more data to
underlying services than just a list of names (a domain scope for every group),
or C) passing only globally-unique group IDs (services would then have to retrieve
additional details about each group from keystone if they so cared).

Option A) More specifically, keystone could opt to enumerate the groups that
belong to the same domain as the user. In this case, it'd probably make more
sense from an API perspective if the groups enumeration were part of the
user resource in the token response body (the user object already has a
containing domain ID). That means that IF a user were to be assigned a group
membership in another domain (assuming we didn't move to disallowing that
behavior at some point), then that group would have to be excluded from this
list. If that were true, it would also follow that X-Group-Names should become
X-User-Group-Names, so that it is more clear that they belong to the
X-User-Domain-*.
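
To make option (A) concrete, a hypothetical token validation body (the field
names are only a sketch, not a committed API) could carry the domain-scoped
group names inside the user object:

    # Hypothetical v3 token validation body under option (A): groups are
    # listed inside "user" and implicitly share the user's owning domain.
    token_body = {
        "token": {
            "user": {
                "id": "u-123",
                "name": "alice",
                "domain": {"id": "d-456", "name": "acme"},
                "groups": [
                    {"id": "g-1", "name": "secret-readers"},
                    {"id": "g-2", "name": "auditors"},
                ],
            },
            # roles, catalog, expiry and so on unchanged
        }
    }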

Option B) This is probably the most complex solution, but also the most 
explicit. I have no idea how this interface would look in terms of headers 
using current conventions. If we're going to break conventions, then I'd want 
to pass an id+domain_id+name for each group reference. So, rather than including
a list of names AND a list of IDs, we'd have some terribly encoded list of 
group objects (I'm not sure what the HTTP convention is on this sort of use 
case, and hoping someone can illustrate a better solution given the 
representation below):

  X-Groups: 
id%3D123%2Cdomain_id%3D456%2Cname%3Dabc,id%3D789%2Cdomain_id%3D357%2Cname%3Ddef
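
For what it's worth, a minimal sketch of how middleware might pack and unpack
such a header (purely illustrative; auth_token does not do anything like this
today):

    import urllib.parse  # illustrative; kilo-era python2 code would use urllib

    def encode_groups_header(groups):
        # One percent-encoded "id=..,domain_id=..,name=.." chunk per group,
        # joined by literal commas (inner commas are encoded away).
        return ",".join(
            urllib.parse.quote(
                "id=%(id)s,domain_id=%(domain_id)s,name=%(name)s" % g, safe="")
            for g in groups)

    def decode_groups_header(value):
        # Reverse: split on the literal commas, unquote, rebuild the dicts.
        return [dict(item.split("=", 1)
                     for item in urllib.parse.unquote(chunk).split(","))
                for chunk in value.split(",")]

    groups = [{"id": "123", "domain_id": "456", "name": "abc"},
              {"id": "789", "domain_id": "357", "name": "def"}]
    header = encode_groups_header(groups)  # matches the X-Groups example above
    assert decode_groups_header(header) == groups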

Option C) Federated tokens would actually require solution (C) today because 
they only include group IDs, not names. But the group enumeration in federated 
tokens was also only intended to be consumed by keystone, so that's not really 
an issue for that one use case. But option (C) would mean there are no 
X-Group-Names passed to services, just X-Group-Ids. I'm guessing this won't 
provide the user experience that Barbican is looking for?


I'm leaning towards solution (A), but curious if that'll work for Barbican 
and/or if anyone has an idea that I'm overlooking.


On Thu, Jun 4, 2015 at 8:18 AM, Dolph Mathews 
dolph.math...@gmail.commailto:dolph.math...@gmail.com wrote:
To clarify: we already have to include the groups produced as a result of 
federation mapping **in the payload** of Fernet tokens so that scoped tokens 
can be created later:

  
https://github.com/openstack/keystone/blob/a637ebcbc4a92687d3e80a50cbe88df3b13c79e6/keystone/token/providers/fernet/token_formatters.py#L523

These are OpenStack group IDs, so it's up to the deployer to keep those under 
control to keep Fernet token sizes down. It's the only place in the current 
Fernet implementation that's (somewhat alarmingly) unbounded in the real world.

But we do **not** have a use case to add groups to *all* Fernet payloads: only 
to token creation  validation responses.


On Thu, Jun 4, 2015 at 2:36 AM, Morgan Fainberg 
morgan.fainb...@gmail.commailto:morgan.fainb...@gmail.com wrote:
For Fernet, the groups would only be populated on validate as Dolph outlined. 
They would not be added to the core payload. We do not want to expand the 
payload in this manner.

--Morgan

Sent via mobile

On Jun 3, 2015, at 21:51, Lance Bragstad 
lbrags...@gmail.commailto:lbrags...@gmail.com wrote:

I feel if we allowed group ids to be an 

Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-04 Thread Eugene Nikanorov
I doubt it's a server-side issue.
Usually there are plenty of rpc workers to drain a much higher amount of rpc
messages coming from agents.
So the issue could be in 'fairness' on the L3 agent side. But from my
observations it was more an issue with the DHCP agent than the L3 agent, due to
differences in resource processing.

Thanks,
Eugene.


On Thu, Jun 4, 2015 at 4:29 PM, Itsuro ODA o...@valinux.co.jp wrote:

 Hi,

  After trying to reproduce this, I'm suspecting that the issue is actually
  on the server side from failing to drain the agent report state queue in
  time.

 I have seen this before.
 I thought the scenario at that time was as follows.
 * a lot of create/update resource API issued
 * rpc_conn_pool_size pool exhausted for sending notify and blocked
   farther sending side of RPC.
 * rpc_thread_pool_size pool exhausted by waiting rpc_conn_pool_size
   pool for replying RPC.
 * receiving state_report is blocked because rpc_thread_pool_size pool
   exhausted.

 Thanks
 Itsuro Oda

 On Thu, 4 Jun 2015 14:20:33 -0700
 Kevin Benton blak...@gmail.com wrote:

  After trying to reproduce this, I'm suspecting that the issue is actually
  on the server side from failing to drain the agent report state queue in
  time.
 
  I set the report_interval to 1 second on the agent and added a logging
  statement and I see a report every 1 second even when sync_routers is
  taking a really long time.
 
  On Thu, Jun 4, 2015 at 11:52 AM, Carl Baldwin c...@ecbaldwin.net
 wrote:
 
   Ann,
  
   Thanks for bringing this up.  It has been on the shelf for a while now.
  
   Carl
  
   On Thu, Jun 4, 2015 at 8:54 AM, Salvatore Orlando sorla...@nicira.com
 
   wrote:
One reason for not sending the heartbeat from a separate greenthread
   could
be that the agent is already doing it [1].
The current proposed patch addresses the issue blindly - that is to
 say
before declaring an agent dead let's wait for some more time because
 it
could be stuck doing stuff. In that case I would probably make the
multiplier (currently 2x) configurable.
   
The reason for which state report does not occur is probably that
 both it
and the resync procedure are periodic tasks. If I got it right
 they're
   both
executed as eventlet greenthreads but one at a time. Perhaps then
 adding
   an
initial delay to the full sync task might ensure the first thing an
 agent
does when it comes up is sending a heartbeat to the server?
   
On the other hand, while doing the initial full resync, is the  agent
   able
to process updates? If not perhaps it makes sense to have it down
 until
   it
finishes synchronisation.
  
   Yes, it can!  The agent prioritizes updates from RPC over full resync
   activities.
  
   I wonder if the agent should check how long it has been since its last
   state report each time it finishes processing an update for a router.
   It normally doesn't take very long (relatively) to process an update
   to a single router.
  
   I still would like to know why the thread to report state is being
   starved.  Anyone have any insight on this?  I thought that with all
   the system calls, the greenthreads would yield often.  There must be
   something I don't understand about it.
  
   Carl
  
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 
  --
  Kevin Benton

 --
 Itsuro ODA o...@valinux.co.jp


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] bp updates spec maintenance

2015-06-04 Thread Andrew Woodward
Since the meeting was closed prior to having a discussion for this topic, I
will post my notes here as well.

I will start updating some of the BPs that have landed in 6.1 to reflect
their current status.

There are a number of specs open for BPs that have code landed. We will
merge the current revisions of these specs; if there are any other issues
or revisions, we will need to open a new CR for them, likely against 7.0.

For specs still open that didn't make 6.1, I will push revisions to move them
to 7.0.

For specs that landed but whose code didn't make 6.1, I will create reviews for
moving them to 7.0.

Lastly, I will start to update BPs that are already planned to be targeted to
7.0. If you have something that should be targeted, please raise it on the
ML.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Sean Dague
On 06/04/2015 01:03 PM, Adam Young wrote:
 On 06/04/2015 09:40 AM, Sean Dague wrote:
 So I feel like I understand the high level dynamic policy end game. I
 feel like what I'm proposing for policy engine with encoded defaults
 doesn't negatively impact that. I feel there is a middle chunk where
 perhaps we've got different concerns or different dragons that we see,
 and are mostly talking past each other. And I don't know how to bridge
 that. All the keystone specs I've dived into definitely assume a level
 of understanding of keystone internals and culture that aren't obvious
 from the outside. 
 
 Policy is not currently designed to be additive; let's take the Nova rule
 
     get_network: rule:admin_or_owner or rule:shared or rule:external or
     rule:context_is_advsvc
 
 FROM http://git.openstack.org/cgit/openstack/neutron/tree/etc/policy.json#n27
 
 This pulls in
 
     external: field:networks:router:external=True,
 Now, we have a single JSON file that implements this. Let's say that you
 ended up coding exactly this rule in python. What would that mean?
 Either you make some way of initializing oslo.policy from a Python
 object, or you enforce outside of oslo.policy (custom Nova code).  If it
 is custom code, you have to say "run oslo" or "run my logic"
 everywhere... you can see that this approach leads to fragmentation of
 policy enforcement.
 
 So, instead, you go the "initialize oslo from Python" route.  We currently have
 the idea of multiple policy files in the directory, so you just treat
 the Python code as a file with either the lowest or highest ABC order,
 depending.  Now, each policy file gets read, and the rules are a
 hashtable, keyed by the rule name.  So both get_network and external are
 keys that get read in.  If 'overwrite' is set, it will only process the
 last set of rules (replaces all rules), but I think what we want here is
 just update:
 
 http://git.openstack.org/cgit/openstack/oslo.policy/tree/oslo_policy/policy.py#n361
 Which would mix together the existing rules with the rules from the
 policy files. 
 
 
 So...what would your intention be with hardcoding the policy in Nova?
 That your rule gets overwritten with the rule that comes from the
 centralized policy store, or that your rule gets executed in addition to
 the rule from central?  Neither is going to get you what you want,
 which is "Make sure you can't break Nova by changing Policy".

It gets overwritten by the central store.

And you are wrong, that gives me what I want, because we can emit a
WARNING in the logs if the patch is something crazy. The operators will
see it, and be able to fix it later.

I'm not trying to prevent people from changing their policy in crazy
ways. I'm trying to build in some safety net where we can detect it's
kind of a bad idea and emit that information in a place that Operators can
see and sort out later, instead of pulling their hair out.

But you can only do that if you have encoded what's the default, plus
annotations about ways that changing the default are unwise.
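
A rough sketch of that kind of safety net, assuming the defaults live in Python
and are compared against the operator's policy.json at startup (this is not
actual oslo.policy API; the rule names and annotations below are only
illustrative):

    import json
    import logging

    LOG = logging.getLogger(__name__)

    # Defaults encoded in the service, plus annotations for rules that are
    # unwise to weaken (both are hypothetical here).
    CODED_DEFAULTS = {
        "get_network": ("rule:admin_or_owner or rule:shared or "
                        "rule:external or rule:context_is_advsvc"),
        "create_network": "",
    }
    RISKY_IF_CHANGED = {
        "get_network": "loosening this can expose other tenants' networks",
    }

    def warn_on_divergence(policy_path="/etc/neutron/policy.json"):
        with open(policy_path) as f:
            operator_rules = json.load(f)
        for name, default in CODED_DEFAULTS.items():
            actual = operator_rules.get(name, default)
            if actual != default and name in RISKY_IF_CHANGED:
                LOG.warning("policy rule %s overridden (%r -> %r): %s",
                            name, default, actual, RISKY_IF_CHANGED[name])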

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Bradley Jones (bradjone)
I’m really keen to take on this effort. To echo what Steve said, I think adding
a dashboard component to Magnum is critical to adoption and to delivering good
usability to all.

Thanks,
Brad Jones


On 4 Jun 2015, at 18:58, Steven Dake (stdake) 
std...@cisco.commailto:std...@cisco.com wrote:

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer Bradley Jones to tackle the
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors.   If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Yee, Guang
I am confused about the goal. Are we saying we should allow operators to modify 
the access policies but then warn them if they do? But if operators *intend* to 
modify the policies in order to fit their compliance/security needs, which is 
likely the case, aren't the warning messages confusing and counterintuitive?


Guang


-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: Thursday, June 04, 2015 10:16 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

On 06/04/2015 01:03 PM, Adam Young wrote:
 On 06/04/2015 09:40 AM, Sean Dague wrote:
 So I feel like I understand the high level dynamic policy end game. I 
 feel like what I'm proposing for policy engine with encoded defaults 
 doesn't negatively impact that. I feel there is a middle chunk where 
 perhaps we've got different concerns or different dragons that we 
 see, and are mostly talking past each other. And I don't know how to 
 bridge that. All the keystone specs I've dived into definitely assume 
 a level of understanding of keystone internals and culture that 
 aren't obvious from the outside.
 
 Policy is not currently designed to be additive;  let's take the Nova 
 rule||
 ||
 ||get_network: rule:admin_or_owner or rule:shared or rule:external 
 ||or
 rule:context_is_advsvc||
 ||
 |FROM
 http://git.openstack.org/cgit/openstack/neutron/tree/etc/policy.json#n
 27|
 ||
 |This pulls in |
 
 external: field:networks:router:external=True,
 |
 Now, we have a single JSON file that implements this. Lets say that 
 you ended up coding exactly this rule in python. What would that mean?
 Either you make some way of initializing oslo.policy from a Python 
 object, or you enforce outside of Oslo.policy (custom nova Code).  If 
 it is custom code, you  have to say run oslo or run my logic
everywhere... you can see that this approach leads to fragmentation of
 policy enforcement.
 
 So, instead, you go the initialize oslo from Python.  We currently
 have the idea of multiple policy files in the directory, so you just 
 treat the Python code as a file with either the lowest or highest ABC 
 order, depending.  Now, each policy file gets read, and the rules are 
 a hashtable, keyed by the rule name.  So both get_network and external 
 are keys that get read in.  If 'overwrite' is set, it will only 
 process the last set of rules (replaces all rules)  but I think what 
 we want here is just update:
 
 http://git.openstack.org/cgit/openstack/oslo.policy/tree/oslo_policy/p
 olicy.py#n361 Which would mix together the existing rules with the 
 rules from the policy files.
 
 
 So...what would your intention be with hardcoding the policy in Nova? 
 That your rule gets overwritten with the rule that comes from the 
 centralized policy store, or that your rule gets executed in addition
 to the rule from central?  Neither are going to get you what you want, 
 which is Make sure you can't break Nova by changing Policy

It gets overwritten by the central store.

And you are wrong, that gives me what I want, because we can emit a WARNING in 
the logs if the patch is something crazy. The operators will see it, and be 
able to fix it later.

I'm not trying to prevent people from changing their policy in crazy ways. I'm 
trying to build in some safety net where we can detect it's kind of a bad idea 
and emit that information in a place that Operators can see and sort out later,
instead of pulling their hair out.

But you can only do that if you have encoded what's the default, plus 
annotations about ways that changing the default are unwise.

-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Steven Dake (stdake)
Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer Bradley Jones to tackle the
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors.   If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Thai Q Tran
I am interested but not sure how much time I have this release cycle. I can take on a more hands-off approach and help review to make sure that magnum-ui is aligned with future Horizon directions.

-"Steven Dake (stdake)" std...@cisco.com wrote: -
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
From: "Steven Dake (stdake)" std...@cisco.com
Date: 06/04/2015 11:03AM
Subject: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team
Hey folks,


I think it is critical for self-service needs that we have a Horizon dashboard to represent Magnum. I know the entire Magnum team has no experience in UI development, but I have found at least one volunteer Bradley Jones to tackle the work.


I am looking for more volunteers to tackle this high impact effort to bring Containers to OpenStack either in the existing Magnum core team or as new contributors.  If you're interested, please chime in on this thread.


As far as "how to get patches approved", there are two models we can go with.


Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A Magnum infrastructure code. This also preserves us as one team with one mission.


Option #2:
We make a new core team magnum-ui-core. This presents special problems if the UI contributor team isn't large enough to get reviews in. I suspect Option #2 will be difficult to execute.


Cores, please vote on Option #1, or Option #2, and Adrian can make a decision based upon the results.


Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Davanum Srinivas
+1 to single team

-- dims

On Thu, Jun 4, 2015 at 2:02 PM, Steven Dake (stdake) std...@cisco.com wrote:
 My vote is +1 for a unified core team for all Magnum development which in
 the future will include the magnum-ui repo, the python-magnumclient repo,
 the magnum repo, and the python-k8sclient repo.

 Regards
 -steve

 From: Steven Dake std...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Thursday, June 4, 2015 at 10:58 AM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum -
 need a vote from the core team

 Hey folks,

 I think it is critical for self-service needs that we have a Horizon
 dashboard to represent Magnum.  I know the entire Magnum team has no
 experience in UI development, but I have found at least one volunteer Bradley
 Jones to tackle the work.

 I am looking for more volunteers to tackle this high impact effort to bring
 Containers to OpenStack either in the existing Magnum core team or as new
 contributors.   If you're interested, please chime in on this thread.

 As far as “how to get patches approved”, there are two models we can go
 with.

 Option #1:
 We add these UI folks to the magnum-core team and trust them not to +2/+A
 Magnum infrastructure code.  This also preserves us as one team with one
 mission.

 Option #2:
 We make a new core team magnum-ui-core.  This presents special problems if
 the UI contributor team isn’t large enough to get reviews in.  I suspect
 Option #2 will be difficult to execute.

 Cores, please vote on Option #1, or Option #2, and Adrian can make a
 decision based upon the results.

 Regards
 -steve


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Abishek Subramanian (absubram)
Same here. I’m definitely interested in helping out but not sure how much time
I can commit. I will most definitely help out with reviews and the other
decision-making processes to help ensure magnum-ui is implemented in the
correct direction relative to Horizon.

From: Thai Q Tran tqt...@us.ibm.commailto:tqt...@us.ibm.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, June 4, 2015 at 2:07 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

I am interested but not sure how much time I have this release cycle. I can 
take on a more hands-off approach and help review to make sure that magnum-ui 
is aligned with future Horizon directions.

-Steven Dake (stdake) std...@cisco.commailto:std...@cisco.com wrote: 
-
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
From: Steven Dake (stdake) std...@cisco.commailto:std...@cisco.com
Date: 06/04/2015 11:03AM
Subject: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need 
a vote from the core team

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer Bradley Jones to tackle the
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors.   If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] How to deal with aborted image read?

2015-06-04 Thread Chris Friesen

On 06/04/2015 03:01 AM, Flavio Percoco wrote:

On 03/06/15 16:46 -0600, Chris Friesen wrote:

We recently ran into an issue where nova couldn't write an image file due to
lack of space and so just quit reading from glance.

This caused glance to be stuck with an open file descriptor, which meant that
the image consumed space even after it was deleted.

I have a crude fix for nova at https://review.openstack.org/#/c/188179/;
which basically continues to read the image even though it can't write it.
That seems less than ideal for large images though.

Is there a better way to do this?  Is there a way for nova to indicate to
glance that it's no longer interested in that image and glance can close the
file?

If I've followed this correctly, on the glance side I think the code in
question is ultimately glance_store._drivers.filesystem.ChunkedFile.__iter__().


Actually, to be honest, I was quite confused by the email :P

Correct me if I still didn't understand what you're asking.

You ran out of space on the Nova side while downloading the image and
there's a file descriptor leak somewhere either in that lovely (sarcasm)
glance wrapper or in glanceclient.


The first part is correct, but the file descriptor is actually held by 
glance-api.


Just by reading your email and glancing at your patch, I believe the bug
might be in glanceclient but I'd need to dive into this. The piece of
code you'll need to look into is [0].

glance_store is just used server side. If that's what you meant -
glance is keeping the request and the ChunkedFile around - then yes,
glance_store is the place to look into.

[0]
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v1/images.py#L152


I believe what's happening is that the ChunkedFile code opens the file and 
creates the iterator.  Nova then starts iterating through the file.


If nova (or any other user of glance) iterates all the way through the file then 
the ChunkedFile code will hit the finally clause in __iter__() and close the 
file descriptor.


If nova starts iterating through the file and then stops (due to running out of 
room, for example), the ChunkedFile.__iter__() routine is left with an open file 
descriptor.  At this point deleting the image will not actually free up any space.
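
A stripped-down illustration of that pattern (not the actual glance_store code;
the path below is made up): the descriptor is only released when the generator
runs to completion or is closed, so a consumer that stops early and keeps a
reference pins it, and the space for a deleted image stays allocated.

    CHUNKSIZE = 65536

    def chunked_file(path, chunk_size=CHUNKSIZE):
        # Yield the file in chunks; the fd is released only in the finally.
        fobj = open(path, "rb")
        try:
            while True:
                chunk = fobj.read(chunk_size)
                if not chunk:
                    break
                yield chunk
        finally:
            fobj.close()

    it = chunked_file("/var/lib/glance/images/some-image")  # made-up path
    first_chunk = next(it)
    # Consumer hits ENOSPC here and simply stops: no further next(), no close().
    # The finally clause has not run, so the fd stays open for as long as the
    # server keeps 'it' alive; deleting the image file frees no disk space.
    it.close()  # explicitly closing (or iterating to the end) releases the fd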


I'm not a glance guy so I could be wrong about the code.  The externally-visible 
data are:

1) glance-api is holding an open file descriptor to a deleted image file
2) If I kill glance-api the disk space is freed up.
3) If I modify nova to always finish iterating through the file the problem 
doesn't occur in the first place.


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Prioritize tests over JSCS

2015-06-04 Thread Thai Q Tran
Hi folks,

I know a lot of people are tackling the JSCS stuff, and that's really great. But it would be extra nice to see the JSCS cleanups come along with JP's guidelines in your patches. Furthermore, if the file you are working on doesn't have an accompanying spec file, please make sure that the tests for it exist. If they are not there, please prioritize and spend some time reviewing patches with the tests you need, or create a spec file and get that merged first.

Thanks


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-04 Thread Assaf Muller


- Original Message -
 One reason for not sending the heartbeat from a separate greenthread could be
 that the agent is already doing it [1].
 The current proposed patch addresses the issue blindly - that is to say
 before declaring an agent dead let's wait for some more time because it
 could be stuck doing stuff. In that case I would probably make the
 multiplier (currently 2x) configurable.
 
 The reason for which state report does not occur is probably that both it and
 the resync procedure are periodic tasks. If I got it right they're both
 executed as eventlet greenthreads but one at a time. Perhaps then adding an
 initial delay to the full sync task might ensure the first thing an agent
 does when it comes up is sending a heartbeat to the server?

There's a patch that is related to this issue:
https://review.openstack.org/#/c/186584/

I made a comment there; at least to me, it makes a lot of sense to insert
a report_state call in the after_start method, right after the agent initializes
but before it performs the first full sync. So, right here before line 560:
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/agent.py#L560

That should help *some* of the issues discussed in this thread, but not all.
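
For illustration only (names are simplified and this is not a concrete patch
against the agent), decoupling the heartbeat from the sync work would amount to
spawning the state report loop in its own greenthread before the initial full
sync starts:

    import eventlet
    eventlet.monkey_patch()

    REPORT_INTERVAL = 30  # seconds; corresponds to the report_interval option

    def _report_state_loop(agent):
        # Runs in its own greenthread, so a long sync_routers call in the
        # main greenthread cannot starve the heartbeat.
        while True:
            try:
                agent.report_state()
            except Exception:
                pass  # a real agent would log and keep the heartbeat going
            eventlet.sleep(REPORT_INTERVAL)

    def after_start(agent):
        eventlet.spawn(_report_state_loop, agent)  # heartbeat first...
        agent.periodic_sync_routers_task()         # ...then the full resync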

 
 On the other hand, while doing the initial full resync, is the agent able to
 process updates? If not perhaps it makes sense to have it down until it
 finishes synchronisation.
 
 Salvatore
 
 [1]
 http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l3/agent.py#n587
 
 On 4 June 2015 at 16:16, Kevin Benton  blak...@gmail.com  wrote:
 
 
 
 
 Why don't we put the agent heartbeat into a separate greenthread on the agent
 so it continues to send updates even when it's busy processing changes?
 On Jun 4, 2015 2:56 AM, Anna Kamyshnikova  akamyshnik...@mirantis.com 
 wrote:
 
 
 
 Hi, neutrons!
 
 Some time ago I discovered a bug for l3 agent rescheduling [1]. When there
 are a lot of resources and agent_down_time is not big enough neutron-server
 starts marking l3 agents as dead. The same issue has been discovered and
 fixed for DHCP-agents. I proposed a change similar to those that were done
 for DHCP-agents. [2]
 
 There is no unified opinion on this bug and proposed change, so I want to ask
 developers whether it is worth continuing work on this patch or not.
 
 [1] - https://bugs.launchpad.net/neutron/+bug/1440761
 [2] - https://review.openstack.org/171592
 
 --
 Regards,
 Ann Kamyshnikova
 Mirantis, Inc
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Steven Dake (stdake)
My vote is +1 for a unified core team for all Magnum development which in the 
future will include the magnum-ui repo, the python-magnumclient repo, the 
magnum repo, and the python-k8sclient repo.

Regards
-steve

From: Steven Dake std...@cisco.commailto:std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, June 4, 2015 at 10:58 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need 
a vote from the core team

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer Bradley Jones to tackle the
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors.   If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Chris Friesen

On 06/04/2015 10:14 AM, Devananda van der Veen wrote:


On Jun 4, 2015 8:57 AM, Monty Taylor mord...@inaugust.com



  So, seriously - let's grow up and start telling people that they do not
  get to pick and choose user-visible feature sets. If they have an unholy
  obsession with a particular backend technology that does not allow a
  public feature of the API to work, then they are deploying a broken
  cloud and they need to fix it.
 

So I just had dinner last night with a very large user of OpenStack (yes, they
exist)  whose single biggest request is that we stop differentiating in the
API. To them, any difference in the usability / behavior / API between OpenStack
deployment X and Y is a serious enough problem that it will have two effects:
- vendor lock in
- they stop using OpenStack
And since avoiding single vendor lock in is important to them, well, really it
has only one result.

Tl;Dr; Monty is right. We MUST NOT vary the API or behaviour significantly or
non-discoverably between clouds. Or we simply won't have users.


If a vendor wants to differentiate themselves, what about having two sets of 
API endpoints?  One that is full vanilla openstack with bog-standard behaviour, 
and one that has vendor-specific stuff in it?


That way the end-users that want interop can just use the standard API and get 
common behaviour across clouds, while the end-users that want the special 
sauce and are willing to lock in to a vendor to get it can use the 
vendor-specific API.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Rebranded Volume Drivers

2015-06-04 Thread Jay S. Bryant



On 06/03/2015 02:53 PM, Eric Harney wrote:

On 06/03/2015 01:59 PM, John Griffith wrote:

On Wed, Jun 3, 2015 at 11:32 AM, Mike Perez thin...@gmail.com wrote:


There are a couple of cases [1][2] I'm seeing where new Cinder volume
drivers for Liberty are rebranding other volume drivers. This involves
inheriting off another volume driver's class(es) and providing some
config options to set the backend name, etc.

Two problems:

1) There is a thought that no CI [3] is needed, since you're using
another vendor's driver code which does have a CI.

2) IMO another way of satisfying a check mark of being OpenStack
supported and disappearing from the community.

What gain does OpenStack get from these kind of drivers?

Discuss.

[1] - https://review.openstack.org/#/c/187853/
[2] - https://review.openstack.org/#/c/187707/4
[3] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers

--
Mike Perez


This case is interesting mostly because it's the same contractor
submitting the driver for all the related platforms.  Frankly I find the
whole rebranding annoying, but there's certainly nothing really wrong with
it, and well... why not, it's Open Source.

What I do find annoying is the lack of give back; so this particular
contributor has submitted a few drivers thus far (SCST, DotHill and some
others IIRC), and now has three more proposed. This would be great except I
personally have spent a very significant amount of time with this person
helping with development, CI and understanding OpenStack and Cinder.

To date, I don't see that he's provided a single code review (good or bad)
or contributed anything back other than to his specific venture.

Anyway... I think your point was for input on the two questions:

For item '1':
I guess as silly as it seems, they should probably have 3rd-party CI.
There are firmware differences etc. that may actually change behaviors, or
things may diverge, or maybe their code is screwed up and the inheritance
doesn't work (doubtful).

Given that part of the case made for CI was ensure that Cinder ships
drivers that work, the case of backend behavior diverging over time
from what originally worked with Cinder seems like a valid concern.  We
lose the ability to keep tabs on that for derived drivers without CI.


Yes, it's just a business venture in this case (good or bad, not for me to
decide).  The fact is we don't discriminate or place a value on people's
contributions, and this shouldn't be any different.  I think the best
answer is to follow the same process for any driver and move on.  This does
point out that maybe OpenStack/Cinder has grown to a point where there are
so many options and choices that it's time to think about changing some of
the policies and ways we do things.

In my opinion, OpenStack doesn't gain much in this particular case, which
brings me back to:
remove all drivers except the ref-impl and have them pip-installable and on
a certified list based on CI.

Thanks,
John


The other issue I see with not requiring CI for derived drivers is
that, inevitably, small changes will be made to the driver code, and we
will find ourselves having to sort out how much change can happen before
CI is then required.  I don't know how to define that in a way that
would be useful as a general policy.

Eric

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


At first I wasn't too sure that requiring an additional CI for a derived 
driver made sense.  The concerns raised, however, are compelling and I 
think that a CI should be required for rebranded drivers.  It is 
inevitable that subtle differences will sneak in and we need to have CI 
in place to catch any failures that may also be introduced.


Jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Hongbin Lu
Could we have a new group magnum-ui-core and include magnum-core as a subgroup, 
like the heat-coe-template-core group?

Thanks,
Hongbin

From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: June-04-15 1:58 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need 
a vote from the core team

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer, Bradley Jones, to tackle the 
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors.   If you're interested, please chime in on this thread.

As far as how to get patches approved, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn't large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Vahid S Hashemian
undefined


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Vahid S Hashemian
undefined


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Which middleware does keystone use for authentication and authorization?

2015-06-04 Thread Amy Zhang
Hi all,

Does anyone know which middleware OpenStack uses to authenticate users
with Keystone in the Kilo release?  I saw two middlewares: one is in the Keystone
client, the other is in an independent directory. I know in the previous version
the middleware in the keystone client was used, but in Kilo, I am not sure if
they are still using the middleware in the keystone client or the other. Does
anyone have any idea?

Thanks!

-- 
Best regards,
Amy (Yun Zhang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-04 Thread Kevin Benton
Is there a way to parallelize the period tasks? I wanted to go this route
because I encountered cases where a bunch of routers would get scheduled to
l3 agents and they would all hit the server nearly simultaneously with a
sync routers task.

This could result in thousands of routers and their floating IPs being
retrieved, which would result in tens of thousands of SQL queries. During
this time, the agents would time out and have all their routers
rescheduled, leading to a downward spiral of doom.

I spent a bunch of time optimizing the sync routers calls on the l3 side so
it's hard to trigger this now, but I would be more comfortable if we didn't
depend on sync routers taking less time than the agent down time.

If we can have the heartbeats always running, it should solve both issues.
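
For illustration, a minimal sketch of that separate-greenthread idea, using
oslo_service's looping call; the interval and method names here are
placeholders, not Neutron's actual agent code:

    import eventlet
    eventlet.monkey_patch()

    from oslo_service import loopingcall


    class AgentSketch(object):
        def __init__(self, state_rpc, report_interval=30):
            self.state_rpc = state_rpc
            self.report_interval = report_interval

        def _report_state(self):
            # Runs in its own greenthread, so a long-running full sync can no
            # longer delay the heartbeat past agent_down_time.
            self.state_rpc.report_state({'binary': 'neutron-l3-agent',
                                         'alive': True})

        def start(self):
            heartbeat = loopingcall.FixedIntervalLoopingCall(self._report_state)
            # initial_delay=None fires immediately, so the server sees the
            # agent as alive before the first full sync even begins.
            heartbeat.start(interval=self.report_interval, initial_delay=None)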
On Jun 4, 2015 8:56 AM, Salvatore Orlando sorla...@nicira.com wrote:

 One reason for not sending the heartbeat from a separate greenthread could
 be that the agent is already doing it [1].
 The current proposed patch addresses the issue blindly - that is to say
 before declaring an agent dead let's wait for some more time because it
 could be stuck doing stuff. In that case I would probably make the
 multiplier (currently 2x) configurable.

 The reason for which state report does not occur is probably that both it
 and the resync procedure are periodic tasks. If I got it right they're both
 executed as eventlet greenthreads but one at a time. Perhaps then adding an
 initial delay to the full sync task might ensure the first thing an agent
 does when it comes up is sending a heartbeat to the server?

 On the other hand, while doing the initial full resync, is the  agent able
 to process updates? If not perhaps it makes sense to have it down until it
 finishes synchronisation.

 Salvatore

 [1]
 http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l3/agent.py#n587

 On 4 June 2015 at 16:16, Kevin Benton blak...@gmail.com wrote:

 Why don't we put the agent heartbeat into a separate greenthread on the
 agent so it continues to send updates even when it's busy processing
 changes?
 On Jun 4, 2015 2:56 AM, Anna Kamyshnikova akamyshnik...@mirantis.com
 wrote:

 Hi, neutrons!

 Some time ago I discovered a bug for l3 agent rescheduling [1]. When
 there are a lot of resources and agent_down_time is not big enough
 neutron-server starts marking l3 agents as dead. The same issue has been
 discovered and fixed for DHCP-agents. I proposed a change similar to those
 that were done for DHCP-agents. [2]

 There is no unified opinion on this bug and proposed change, so I want
 to ask developers whether it worth to continue work on this patch or not.

 [1] - https://bugs.launchpad.net/neutron/+bug/1440761
 [2] - https://review.openstack.org/171592

 --
 Regards,
 Ann Kamyshnikova
 Mirantis, Inc


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] stubs considered harmful in spec tests

2015-06-04 Thread Rich Megginson
Summary - In puppet module spec tests, do not use "stubs", which means 
the method will be called 0 or more times.  Instead, use "expects", 
which means the method must be called exactly 1 time, or some other more 
fine-grained expectation method stubber.


Our puppet unit tests mostly use rspec, but use Mocha 
http://gofreerange.com/mocha/docs/index.html for object mocking and 
method stubbing.


I have already run into several cases where the spec test result is 
misleading because stubs was used instead of expects, and I have 
spent a lot of time trying to figure out why a method was not called, 
because adding an expectation like


  provider.class.stubs(:openstack)
                .with('endpoint', 'list', '--quiet',
                      '--format', 'csv', [])
                .returns('ID,Region,Service Name,Service Type,Enabled,Interface,URL
2b38d77363194018b2b9b07d7e6bdc13,RegionOne,keystone,identity,True,admin,http://127.0.0.1:5002/v3
3097d316c19740b7bc866c5cb2d7998b,RegionOne,keystone,identity,True,internal,http://127.0.0.1:5001/v3
3445dddcae1b4357888ee2a606ca1585,RegionOne,keystone,identity,True,public,http://127.0.0.1:5000/v3
')

implies that "openstack endpoint list" will be called.

If at all possible, we should use an explicit expectation.  For example, 
in the above case, use expects instead:


  provider.class.expects(:openstack)
                .with('endpoint', 'list', '--quiet',
                      '--format', 'csv', [])
                .returns('ID,Region,Service Name,Service Type,Enabled,Interface,URL
2b38d77363194018b2b9b07d7e6bdc13,RegionOne,keystone,identity,True,admin,http://127.0.0.1:5002/v3
3097d316c19740b7bc866c5cb2d7998b,RegionOne,keystone,identity,True,internal,http://127.0.0.1:5001/v3
3445dddcae1b4357888ee2a606ca1585,RegionOne,keystone,identity,True,public,http://127.0.0.1:5000/v3
')

This means that "openstack endpoint list" must be called once, and only 
once.  For odd cases where you want a method to be called some certain 
number of times, or to return different values each time it is called, 
the Expectation class 
http://gofreerange.com/mocha/docs/Mocha/Expectation.html should be used 
to modify the initial expectation.


Unfortunately, I don't think we can just do a blanket 
s/stubs/expects/g in *_spec.rb, without incurring a lot of test 
failures.  So perhaps we don't have to do this right away, but I think 
future code reviews should -1 any spec file that uses stubs without a 
strong justification.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why do we drop branches? (WAS: Re: Targeting icehouse-eol?)

2015-06-04 Thread ZZelle
We can do the opposite to avoid more and more ACLs:

ALLOW push on some specific stable branches

[access refs/heads/stable/kilo]
  push = allow group ***-stable-maint

[access refs/heads/stable/juno]
  push = allow group ***-stable-maint


BLOCK push on others stable branches

[access refs/heads/stable/juno]
  push =  block group Anonymous Users


Cedric/ZZelle@IRC





On Thu, Jun 4, 2015 at 6:15 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-06-04 16:23:12 +0200 (+0200), Ihar Hrachyshka wrote:
  Why do we even drop stable branches? If anything, it introduces
  unneeded problems to those who have their scripts/cookbooks set to
  chase those branches. They would need to switch to eol tag. Why not
  just leaving them sitting there, marked read only?
 
  It becomes especially important now that we say that stable HEAD *is*
  a stable release.

 It's doable, but we'll need ACL changes applied to every project
 participating in this release model to reject new change submissions
 and prevent anyone from approving them on every branch which reaches
 its EOL date. These ACLs will also grow longer and longer over time
 as we need to add new sections for each EOL branch.

 Also, it seems to me like a feature if downstream consumers have
 to take notice and explicitly adjust their tooling to intentionally
 continue deploying a release for which we no longer provide support
 and security updates.
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Adam Young

On 06/04/2015 09:40 AM, Sean Dague wrote:

So I feel like I understand the high level dynamic policy end game. I
feel like what I'm proposing for policy engine with encoded defaults
doesn't negatively impact that. I feel there is a middle chunk where
perhaps we've got different concerns or different dragons that we see,
and are mostly talking past each other. And I don't know how to bridge
that. All the keystone specs I've dived into definitely assume a level
of understanding of keystone internals and culture that aren't obvious
from the outside.


Policy is not currently designed to be additive; let's take the Nova rule

    "get_network": "rule:admin_or_owner or rule:shared or rule:external or
                    rule:context_is_advsvc"

FROM
http://git.openstack.org/cgit/openstack/neutron/tree/etc/policy.json#n27

This pulls in

    "external": "field:networks:router:external=True",

Now, we have a single JSON file that implements this. Let's say that you 
ended up coding exactly this rule in Python. What would that mean?  
Either you make some way of initializing oslo.policy from a Python 
object, or you enforce outside of oslo.policy (custom Nova code).  If it 
is custom code, you have to say "run oslo" or "run my logic" 
everywhere... you can see that this approach leads to fragmentation of 
policy enforcement.


So, instead, you go the "initialize oslo from Python" route.  We currently have 
the idea of multiple policy files in the directory, so you just treat 
the Python code as a file with either the lowest or highest ABC order, 
depending.  Now, each policy file gets read, and the rules are a 
hashtable, keyed by the rule name.  So both get_network and external are 
keys that get read in.  If 'overwrite' is set, it will only process the 
last set of rules (replaces all rules), but I think what we want here is 
just update:

http://git.openstack.org/cgit/openstack/oslo.policy/tree/oslo_policy/policy.py#n361

which would mix together the existing rules with the rules from the 
policy files.
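
A rough sketch of that update behavior with oslo.policy, to make the
mechanics concrete (the enforcer wiring inside a real service is more
involved; the rule strings are the ones quoted above):

    from oslo_config import cfg
    from oslo_policy import policy

    # In-code defaults, expressed exactly the way policy.json expresses them.
    DEFAULT_RULES = policy.Rules.from_dict({
        'external': 'field:networks:router:external=True',
        'get_network': ('rule:admin_or_owner or rule:shared or '
                        'rule:external or rule:context_is_advsvc'),
    })

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.set_rules(DEFAULT_RULES, overwrite=True)

    # Rules coming from policy.json (or a centralized store) are then applied
    # with overwrite=False -- i.e. update semantics: keys present in the new
    # set replace the defaults, every other default is kept as-is.
    rules_from_file = policy.Rules.from_dict({
        'get_network': 'rule:admin_or_owner',
    })
    enforcer.set_rules(rules_from_file, overwrite=False)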



So... what would your intention be with hardcoding the policy in Nova?  
That your rule gets overwritten with the rule that comes from the 
centralized policy store, or that your rule gets executed in addition to 
the rule from central?  Neither is going to get you what you want, 
which is "make sure you can't break Nova by changing policy".

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-06-04 Thread Miguel Lavalle
Congrats! Well deserved

On Thu, Jun 4, 2015 at 8:50 AM, Assaf Muller amul...@redhat.com wrote:

 Thank you.

 We have a lot of work ahead of us :)


 - Original Message -
  It's been a week since I proposed this, with no objections. Welcome to
 the
  Neutron core reviewer team as the new QA Lieutenant Assaf!
 
  On Tue, Jun 2, 2015 at 12:35 PM, Maru Newby  ma...@redhat.com  wrote:
 
 
  +1 from me, long overdue!
 
 
   On May 28, 2015, at 9:42 AM, Kyle Mestery  mest...@mestery.com 
 wrote:
  
   Folks, I'd like to propose Assaf Muller to be a member of the Neutron
 core
   reviewer team. Assaf has been a long time contributor in Neutron, and
 he's
   also recently become my testing Lieutenant. His influence and
 knowledge in
   testing will be critical to the team in Liberty and beyond. In
 addition to
   that, he's done some fabulous work for Neutron around L3 HA and DVR.
 Assaf
   has become a trusted member of our community. His review stats place
 him
   in the pack with the rest of the Neutron core reviewers.
  
   I'd also like to take this time to remind everyone that reviewing code
 is a
   responsibility, in Neutron the same as other projects. And core
 reviewers
   are especially beholden to this responsibility. I'd also like to point
 out
   that +1/-1 reviews are very useful, and I encourage everyone to
 continue
   reviewing code even if you are not a core reviewer.
  
   Existing Neutron cores, please vote +1/-1 for the addition of Assaf to
 the
   core reviewer team.
  
   Thanks!
   Kyle
  
   [1] http://stackalytics.com/report/contribution/neutron-group/180
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Sean Dague
On 06/04/2015 11:27 AM, Dmitry Tantsur wrote:
 On 06/04/2015 05:03 PM, Sean Dague wrote:
 On 06/04/2015 10:50 AM, Dmitry Tantsur wrote:
 On 06/04/2015 04:40 PM, Ruby Loo wrote:
 Hi,

 In Kilo, we introduced microversions but it seems to be a
 work-in-progress. There is an effort now to add microversion into the
 API-WG's guidelines, to provide a consistent way of using microversions
 across OpenStack projects [1]. Specifically, in the context of this
 email, there is a proposed guideline for when to bump the microversion
 [2].

  As I understand it, this guideline says to bump the microversion on every
  change, which I strongly -1 as usual. Reason: it's a bump for the sake of
  a bump, without any direct benefit for users (no, API discoverability is
  not one, because microversions do not solve it).

 I'll post the same comment to the guideline.

 Backwards compatible API adds with no user signaling is a fallacy
 because it assumes the arrow of time flows only one way.

 If at version 1.5 you have a resource that is

 foo {
bar: ...
 }

 And then you decide you want to add another attribute

 foo {
bar: ...
baz: ...
 }

 And you don't bump the version, you'll get a set of users that use a
 cloud with baz, and incorrectly assume that version 1.5 of the API means
 that baz will always be there. Except, there are lots of clouds out
 there, including ones that might be at the code commit before it was
 added. Because there are lots of deploys in the world, your users can
 effectively go back in time.

 So now your API definition for version 1.5 is:

 foo, may or may not contain baz, and there is no way of you knowing if
 it will until you try. good luck.

 Which is pretty awful.
 
 Which is not very different from your definition. Version 1.5 contains
 feature xyz, unless it's disabled by the configuration or patched out
 downstream. Well, 1.4 can also contain the feature, if downstream
 backported it. So good luck again.

The whole point of interop is you can't call it OpenStack if you are
patching it downstream to break the upstream contract. Microversions are
a contract.

Downstream can hack code all they want, it's no longer OpenStack when
they do. If they are ok with it, that's fine. But them taking OpenStack
code and making something that's not OpenStack is beyond the scope of
the problem here. This is about good actors, acting in good faith, to
provide a consistent experience to application writers.
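
To make that contract concrete, a minimal, purely hypothetical sketch of
gating a new field behind a microversion on the server side (the names and
the 1.6 cut-off are invented, not Nova's actual plumbing):

    def show_foo(requested_version, foo):
        """requested_version is a (major, minor) tuple parsed from the
        version header the client sent."""
        body = {'bar': foo['bar']}
        # Only clients that explicitly asked for >= 1.6 ever see 'baz', so
        # "version 1.5" means exactly the same document on every cloud.
        if requested_version >= (1, 6):
            body['baz'] = foo['baz']
        return {'foo': body}


    print(show_foo((1, 5), {'bar': 1, 'baz': 2}))  # {'foo': {'bar': 1}}
    print(show_foo((1, 6), {'bar': 1, 'baz': 2}))  # {'foo': {'bar': 1, 'baz': 2}}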

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-06-04 Thread Vikram Choudhary
Congrats Assaf ;)

On Thu, Jun 4, 2015 at 9:35 PM, Somanchi Trinath 
trinath.soman...@freescale.com wrote:

  Congratulations Assaf ☺







 *From:* Jaume Devesa [mailto:devv...@gmail.com]
 *Sent:* Thursday, June 04, 2015 9:25 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] Proposing Assaf Muller for the
 Neutron Core Reviewer Team



 Congratulations Assaf!!



 On 4 June 2015 at 17:45, Paul Michali p...@michali.net wrote:

  +100 Great addition! Congratulations Assaf!



 On Thu, Jun 4, 2015 at 11:41 AM Miguel Lavalle mig...@mlavalle.com
 wrote:

  Congrats! Well deserved



 On Thu, Jun 4, 2015 at 8:50 AM, Assaf Muller amul...@redhat.com wrote:

 Thank you.

 We have a lot of work ahead of us :)



 - Original Message -
  It's been a week since I proposed this, with no objections. Welcome to
 the
  Neutron core reviewer team as the new QA Lieutenant Assaf!
 
  On Tue, Jun 2, 2015 at 12:35 PM, Maru Newby  ma...@redhat.com  wrote:
 
 
  +1 from me, long overdue!
 
 
   On May 28, 2015, at 9:42 AM, Kyle Mestery  mest...@mestery.com 
 wrote:
  
   Folks, I'd like to propose Assaf Muller to be a member of the Neutron
 core
   reviewer team. Assaf has been a long time contributor in Neutron, and
 he's
   also recently become my testing Lieutenant. His influence and
 knowledge in
   testing will be critical to the team in Liberty and beyond. In
 addition to
   that, he's done some fabulous work for Neutron around L3 HA and DVR.
 Assaf
   has become a trusted member of our community. His review stats place
 him
   in the pack with the rest of the Neutron core reviewers.
  
   I'd also like to take this time to remind everyone that reviewing code
 is a
   responsibility, in Neutron the same as other projects. And core
 reviewers
   are especially beholden to this responsibility. I'd also like to point
 out
   that +1/-1 reviews are very useful, and I encourage everyone to
 continue
   reviewing code even if you are not a core reviewer.
  
   Existing Neutron cores, please vote +1/-1 for the addition of Assaf to
 the
   core reviewer team.
  
   Thanks!
   Kyle
  
   [1] http://stackalytics.com/report/contribution/neutron-group/180
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 --

 Jaume Devesa

 Software Engineer at Midokura

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][third-party] Common-CI Virtual Sprint

2015-06-04 Thread Asselin, Ramy
Hi,

It was nice to meet many of you at the Vancouver Infra Working Session. Quite a 
bit of progress was made finding owners for some of the common-ci refactoring 
work [1] and puppet testing work. A few patches were proposed, reviewed, and 
some merged!

As we continue the effort over the coming weeks, I thought it would be helpful 
to schedule a  virtual sprint to complete the remaining tasks.

GOAL: By the end of the sprint, we should be able to set up a 3rd party CI 
system using the same puppet components that the OpenStack infrastructure team 
is using in its production CI system.

I proposed this in Tuesday's Infra meeting [2] and there was general consensus that 
this would be valuable (if clear goals are documented) and that July 8 & 9 are 
good dates. (After the US & Canada July 1st and 4th holidays, not on Tuesday, 
and not near a Liberty milestone.)

I would like to get comments from a broader audience on the goals and dates. 

You can show interest by adding your name to the etherpad [3].

Thank you,
Ramy
irc: asselin



[1] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[2] 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-06-02-19.01.html
[3] https://etherpad.openstack.org/p/common-ci-sprint



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Adam Young

On 06/04/2015 09:40 AM, Sean Dague wrote:

Is there some secret dragon I'm missing here?


No.  But it is a significant bit of coding to do;  you would need to
crawl every API and make sure you hit every code path that could enforce
policy.

Um, I don't understand that.

I'm saying that you'd GET "https://my.nova.api.server/policy".
What would that return?  The default policy.json file that you ship?  Or 
would it be auto-generated based on enforcement in the code?


If it is auto-generated, you need to crawl the code, somehow, to 
generate that.


If it is policy.json, then you are not implementing the defaults in 
code, just returning the one managed by the CMS and deployed with the 
Service endpoint.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Vahid S Hashemian
Hi Gosha,

Sorry, not sure why my last message was delivered as undefined.

Anyways, thanks for pointing me to those materials.
I have a feeling though that due to the nature of Heat-Translator we would need 
to deal with HOT based templates and not MuranoPL.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs



-Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: -
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
Date: 06/03/2015 03:26PM
Subject: Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

Hi,

Murano documentation about all internals is here: 
http://murano.readthedocs.org/en/latest/

You probably need to take some example applications from here: 
https://github.com/openstack/murano-apps

Take something simple like Tomcat and PostgresSQL. You will need to have an 
image for Ubuntu/Debian with murano agent. It could be downloaded form here: 
http://apps.openstack.org/#tab=glance-imagesasset=Debian%208%20x64%20(pre-installed%20murano-agent)

Thanks
Gosha

On Wed, Jun 3, 2015 at 2:21 PM, Vahid S Hashemian vahidhashem...@us.ibm.com 
wrote:
Thanks Gosha.

That's right. I have been using HOT based applications. I have not used 
workflows before and need to dig into them.
If you have any pointers on how to go about workflows please share them with me.

Thanks.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Vahid S Hashemian
undefined


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Monty Taylor
On 06/04/2015 11:27 AM, Dmitry Tantsur wrote:
 On 06/04/2015 05:03 PM, Sean Dague wrote:
 On 06/04/2015 10:50 AM, Dmitry Tantsur wrote:
 On 06/04/2015 04:40 PM, Ruby Loo wrote:
 Hi,

 In Kilo, we introduced microversions but it seems to be a
 work-in-progress. There is an effort now to add microversion into the
 API-WG's guidelines, to provide a consistent way of using microversions
 across OpenStack projects [1]. Specifically, in the context of this
 email, there is a proposed guideline for when to bump the microversion
 [2].

  As I understand it, this guideline says to bump the microversion on every
  change, which I strongly -1 as usual. Reason: it's a bump for the sake of
  a bump, without any direct benefit for users (no, API discoverability is
  not one, because microversions do not solve it).

 I'll post the same comment to the guideline.

 Backwards compatible API adds with no user signaling is a fallacy
 because it assumes the arrow of time flows only one way.

 If at version 1.5 you have a resource that is

 foo {
bar: ...
 }

 And then you decide you want to add another attribute

 foo {
bar: ...
baz: ...
 }

 And you don't bump the version, you'll get a set of users that use a
 cloud with baz, and incorrectly assume that version 1.5 of the API means
 that baz will always be there. Except, there are lots of clouds out
 there, including ones that might be at the code commit before it was
 added. Because there are lots of deploys in the world, your users can
 effectively go back in time.

 So now your API definition for version 1.5 is:

 foo, may or may not contain baz, and there is no way of you knowing if
 it will until you try. good luck.

  Which is pretty awful.
 
 Which is not very different from your definition. Version 1.5 contains
 feature xyz, unless it's disabled by the configuration or patched out
 downstream. Well, 1.4 can also contain the feature, if downstream
 backported it. So good luck again.
 
 If you allow to group features under one microversion, that becomes even
 worse - you can have deployment that got microversion only partially.
 
 For example, that's what I would call API discoverability:
 
  $ ironic has-capability foobar
  true
 
 and that's how it would play with versioning:
 
  $ ironic --ironic-api-version 1.2 has-capability foobar
  false
  $ ironic --ironic-api-version 1.6 has-capability foobar
  true
 
 On the contrary, the only thing that microversion tells me is that the
 server installation is based on a particular upstream commit.
 
 To me these are orthogonal problems, and I believe they should be solved
 differently. Our disagreement is due to seeing them as one problem.

We should stop doing this everywhere in OpenStack. It is the absolute
worst experience ever.

Stop allowing people to disable features with config. There is literally
no user on the face of the planet for whom this is a positive thing.

1.5 should mean that your server has Set(A) of features. 1.6 should mean
Set(A+B) - etc. There should be NO VARIATION and any variation on that
should basically mean that the cloud in question is undeniably broken.

I understand that vendors and operators keep wanting to wank around with
their own narcissistic arrogance to differentiate from one another.

STOP IT

Seriously, it causes me a GIANT amount of pain and quite honestly if I
wasn't tied to using OpenStack because I work on it, I would have given
up on it a long time ago because of evil stuff like this.

So, seriously - let's grow up and start telling people that they do not
get to pick and choose user-visible feature sets. If they have an unholy
obsession with a particular backend technology that does not allow a
public feature of the API to work, then they are deploying a broken
cloud and they need to fix it.



 Looking at your comments in the WG repo you also seem to only be
 considering projects shipped at Major release versions (Kilo, Liberty).
 Which might be true of Red Hat's product policy, but it's not generally
 true that all clouds are at a release boundary. Continous Deployment of
 OpenStack has been a value from Day 1, and many public clouds are not
 using releases, but are using arbitrary points off of master.
 
 I don't know why you decided I don't know it, but you're wrong.
 
 A
 microversion describes when a changes happens so that applications
 writers have a very firm contract about what they are talking to.
 
 No, they don't. Too many things can modify behavior - see above.
 

 -Sean

 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-06-04 Thread Somanchi Trinath
Congratulations Assaf ☺



From: Jaume Devesa [mailto:devv...@gmail.com]
Sent: Thursday, June 04, 2015 9:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron 
Core Reviewer Team

Congratulations Assaf!!

On 4 June 2015 at 17:45, Paul Michali 
p...@michali.netmailto:p...@michali.net wrote:
+100 Great addition! Congratulations Assaf!

On Thu, Jun 4, 2015 at 11:41 AM Miguel Lavalle 
mig...@mlavalle.commailto:mig...@mlavalle.com wrote:
Congrats! Well deserved

On Thu, Jun 4, 2015 at 8:50 AM, Assaf Muller 
amul...@redhat.commailto:amul...@redhat.com wrote:
Thank you.

We have a lot of work ahead of us :)


- Original Message -
 It's been a week since I proposed this, with no objections. Welcome to the
 Neutron core reviewer team as the new QA Lieutenant Assaf!

 On Tue, Jun 2, 2015 at 12:35 PM, Maru Newby  
 ma...@redhat.commailto:ma...@redhat.com  wrote:


 +1 from me, long overdue!


  On May 28, 2015, at 9:42 AM, Kyle Mestery  
  mest...@mestery.commailto:mest...@mestery.com  wrote:
 
  Folks, I'd like to propose Assaf Muller to be a member of the Neutron core
  reviewer team. Assaf has been a long time contributor in Neutron, and he's
  also recently become my testing Lieutenant. His influence and knowledge in
  testing will be critical to the team in Liberty and beyond. In addition to
  that, he's done some fabulous work for Neutron around L3 HA and DVR. Assaf
  has become a trusted member of our community. His review stats place him
  in the pack with the rest of the Neutron core reviewers.
 
  I'd also like to take this time to remind everyone that reviewing code is a
  responsibility, in Neutron the same as other projects. And core reviewers
  are especially beholden to this responsibility. I'd also like to point out
  that +1/-1 reviews are very useful, and I encourage everyone to continue
  reviewing code even if you are not a core reviewer.
 
  Existing Neutron cores, please vote +1/-1 for the addition of Assaf to the
  core reviewer team.
 
  Thanks!
  Kyle
 
  [1] http://stackalytics.com/report/contribution/neutron-group/180
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: 
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Jaume Devesa
Software Engineer at Midokura
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which middleware does keystone use for authentication and authorization?

2015-06-04 Thread Steve Martinelli
You are referring to keystonemiddleware (
https://github.com/openstack/keystonemiddleware) - the Keystone team 
switched over many OpenStack services to use this new library in Kilo. It 
was originally a copy of the code in keystoneclient, the driving factor 
for the change was to decouple the middleware from keystoneclient. 

Thanks,

Steve Martinelli
OpenStack Keystone Core
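
For anyone looking for the wiring, a minimal sketch of putting
keystonemiddleware's auth_token filter in front of a WSGI app; the endpoint
and credentials below are placeholders, and real deployments normally
configure this through api-paste.ini and the service's config file rather
than in code:

    from keystonemiddleware import auth_token


    def app(environ, start_response):
        # auth_token sets headers such as X-Identity-Status and X-Project-Id
        # once the token in X-Auth-Token has been validated against Keystone.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [environ.get('HTTP_X_IDENTITY_STATUS', 'Unknown').encode()]


    conf = {
        'identity_uri': 'http://127.0.0.1:35357',  # placeholder
        'admin_user': 'service-user',              # placeholder
        'admin_password': 'secret',                # placeholder
        'admin_tenant_name': 'service',            # placeholder
    }
    protected_app = auth_token.AuthProtocol(app, conf)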

Amy Zhang amy.u.zh...@gmail.com wrote on 06/04/2015 12:04:54 PM:

 From: Amy Zhang amy.u.zh...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 06/04/2015 12:05 PM
 Subject: [openstack-dev] Which middleware does keystone use for 
 authentication and authorization?
 
 Hi all,
 
 Does any one know which middleware does Openstack use to 
 authenticate users in Keystone in Kilo release?  I saw two 
 middleware, one is in Keystone client, the other is an independent 
 directory. I know the previous version, the middleware in keystone 
 client is used, but in Kilo, I am not sure if they are still using 
 the middleware in keystone client or the other. Anyone has any idea?
 
 Thanks!
 
 -- 
 Best regards,
 Amy (Yun Zhang)
 
__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-06-04 Thread Paul Michali
+100 Great addition! Congratulations Assaf!

On Thu, Jun 4, 2015 at 11:41 AM Miguel Lavalle mig...@mlavalle.com wrote:

 Congrats! Well deserved

 On Thu, Jun 4, 2015 at 8:50 AM, Assaf Muller amul...@redhat.com wrote:

 Thank you.

 We have a lot of work ahead of us :)


 - Original Message -
  It's been a week since I proposed this, with no objections. Welcome
 to the
  Neutron core reviewer team as the new QA Lieutenant Assaf!
 
  On Tue, Jun 2, 2015 at 12:35 PM, Maru Newby  ma...@redhat.com  wrote:
 
 
  +1 from me, long overdue!
 
 
   On May 28, 2015, at 9:42 AM, Kyle Mestery  mest...@mestery.com 
 wrote:
  
   Folks, I'd like to propose Assaf Muller to be a member of the Neutron
 core
   reviewer team. Assaf has been a long time contributor in Neutron, and
 he's
   also recently become my testing Lieutenant. His influence and
 knowledge in
   testing will be critical to the team in Liberty and beyond. In
 addition to
   that, he's done some fabulous work for Neutron around L3 HA and DVR.
 Assaf
   has become a trusted member of our community. His review stats place
 him
   in the pack with the rest of the Neutron core reviewers.
  
   I'd also like to take this time to remind everyone that reviewing
 code is a
   responsibility, in Neutron the same as other projects. And core
 reviewers
   are especially beholden to this responsibility. I'd also like to
 point out
   that +1/-1 reviews are very useful, and I encourage everyone to
 continue
   reviewing code even if you are not a core reviewer.
  
   Existing Neutron cores, please vote +1/-1 for the addition of Assaf
 to the
   core reviewer team.
  
   Thanks!
   Kyle
  
   [1] http://stackalytics.com/report/contribution/neutron-group/180
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Getting `ValueError: Field `volume_id' cannot be None`

2015-06-04 Thread Thang Pham
The problem is in your test case.  There are no such methods as
remotefs.db.snapshot_get or remotefs.db.snapshot_admin_metadata_get.
You need to use with mock.patch('cinder.db.snapshot_get') as snapshot_get,
mock.patch('cinder.db.snapshot_admin_metadata_get')
as snapshot_admin_metadata_get.  These incorrect calls somehow created a
side effect in the other test cases.  I updated your patch with what is
correct, so you should follow it for your other tests.  Your test case needs
a lot more work; I just edited it enough to have it pass the unit tests.

Thang
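
A small illustrative sketch of the corrected patch targets (the fake return
values are stand-ins, not a complete snapshot record):

    import mock


    def run_with_snapshot_mocks(test_body):
        # Patch where the functions are looked up (cinder.db), not the
        # nonexistent remotefs.db attributes used in the failing patch set.
        with mock.patch('cinder.db.snapshot_get') as snapshot_get, \
                mock.patch('cinder.db.snapshot_admin_metadata_get') as meta_get:
            snapshot_get.return_value = {'id': 'fake-snap-id',
                                         'volume_id': 'fake-volume-id'}
            meta_get.return_value = {}
            # Exercise the code under test with both DB calls mocked.
            test_body()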

On Thu, Jun 4, 2015 at 4:36 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 I was able to narrow down to the scenario where it fails only when i do:

 ./run_tests.sh -N cinder.tests.unit.test_remotefs
 cinder.tests.unit.test_volume.VolumeTestCase

 and fails with:
 {0}
 cinder.tests.unit.test_volume.VolumeTestCase.test_can_delete_errored_snapshot
 [0.507361s] ... FAILED

 Captured traceback:
 ~~~
 Traceback (most recent call last):
   File cinder/tests/unit/test_volume.py, line 3029, in
 test_can_delete_errored_snapshot
 snapshot_obj = objects.Snapshot.get_by_id(self.context,
 snapshot_id)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 169,
 in wrapper
 result = fn(cls, context, *args, **kwargs)
   File cinder/objects/snapshot.py, line 130, in get_by_id
 expected_attrs=['metadata'])
   File cinder/objects/snapshot.py, line 112, in _from_db_object
 snapshot[name] = value
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 691,
 in __setitem__
 setattr(self, name, value)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 70,
 in setter
 field_value = field.coerce(self, name, value)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py, line
 183, in coerce
 return self._null(obj, attr)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py, line
 161, in _null
 raise ValueError(_(Field `%s' cannot be None) % attr)
 ValueError: Field `volume_id' cannot be None

 Both the testsuites run fine when i run them individually, as in the below
 is success:

 ./run_tests.sh -N cinder.tests.unit.test_remotefs - no errors

 ./run_tests.sh -N cinder.tests.unit.test_volume.VolumeTestCase - no errors

 So i modified my patch @ https://review.openstack.org/#/c/172808/ (Patch
 set 6) and
 removed all testcase i added in test_remotefs.py except one, so that we
 have lesser code to debug/deal with!

 See
 https://review.openstack.org/#/c/172808/6/cinder/tests/unit/test_remotefs.py

 Now when i disable test_create_snapshot_online_success then running both
 the suites work,
 but when i enable test_create_snapshot_online_success then it fails as
 above.

  I am unable to figure out what the connection is between
  test_create_snapshot_online_success in test_remotefs.py
  and the VolumeTestCase.test_can_delete_errored_snapshot failure in test_volume.py.

 Can someone help here ?

 thanx,
 deepak



 On Thu, Jun 4, 2015 at 1:37 PM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi Thang,
   Since you are working on Snapshot Objects, any idea on why the testcase
 when run all by itself, works, but when run as part of the overall suite,
 fails ?
 This seems to be related to the Snapshot Objects, hence Ccing you.

 On Wed, Jun 3, 2015 at 9:54 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:

 Hi All,
   I am hitting a strange issue when running Cinder unit tests against my
 patch @
 https://review.openstack.org/#/c/172808/5

 I have spent 1 day and haven't been successfull at figuring how/why my
 patch is causing it!

 All tests failing are part of VolumeTestCase suite and from the error
 (see below) it seems
 the Snapshot Object is complaining that 'volume_id' field is null (while
 it shouldn't be)

 An example error from the associated Jenkins run can be seen @

 http://logs.openstack.org/08/172808/5/check/gate-cinder-python27/0abd15e/console.html.gz#_2015-05-22_13_28_47_140

 I am seeing a total of 21 such errors.

 Its strange because, when I try to reproduce it locally in my devstack
 env, I see the below:

 1) When i just run: ./run_tests.sh -N cinder.tests.unit.test_volume.
 VolumeTestCase
 all testcases pass

 2) When i run 1 individual testcase: ./run_tests.sh -N
 cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
 that passes too

 3) When i run : ./run_tests.sh -N
 I see 21 tests failing and all are failing with error similar to the
 below

 {0} cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
 [0.537366s] ... FAILED

 Captured traceback:
 ~~~
 Traceback (most recent call last):
   File cinder/tests/unit/test_volume.py, line 3219, in
 test_delete_busy_snapshot
 snapshot_obj = objects.Snapshot.get_by_id(self.context,
 snapshot_id)
   File 

Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-04 Thread Devananda van der Veen
On Jun 4, 2015 12:00 AM, Xu, Hejie hejie...@intel.com wrote:

 Hi, guys,

 I’m working on adding Microversion into the API-WG’s guideline which make
sure we have consistent Microversion behavior in the API for user.
 The Nova and Ironic already have Microversion implementation, and as I
know Magnum https://review.openstack.org/#/c/184975/ is going to implement
Microversion also.

 Hope all the projects which support( or plan to) Microversion can join
the review of guideline.

 The Mircoversion specification(this almost copy from nova-specs):
https://review.openstack.org/#/c/187112
 And another guideline for when we should bump Mircoversion
https://review.openstack.org/#/c/187896/

  As I know, there is already a small difference between Nova's and
Ironic's implementations. Ironic returns the min/max version in HTTP headers
when the requested version isn't supported by the server. There isn't such a
thing in nova, but that is something we need for version negotiation in nova
also.
  Sean has pointed out we should use the response body instead of HTTP
headers, since the body can include an error message. I really hope the ironic
team can take a look at whether you have a compelling reason for using HTTP
headers.

  And if we decide to return the body instead of HTTP headers, we probably
need to think about backwards compatibility also, because Microversion itself
isn't versioned.
  So I think we should keep those headers for a while; does that make sense?

  Hope we have a good guideline for Microversion, because we can only change
Microversion itself in a backwards-compatible way.

Ironic returns the min/max/current API version in the http headers for
every request.

Why would it return this information in a header on success and in the body
on failure? (How would this inconsistency benefit users?)

To be clear, I'm not opposed to *also* having a useful error message in the
body, but while writing the client side of api versioning, parsing the
range consistently from the response header is, IMO, better than requiring
a conditional.

-Deva
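
For reference, a client-side sketch of that negotiation; the header names are
the X-OpenStack-Ironic-API-* ones Ironic sends today, and the endpoint/token
are placeholders:

    import requests

    IRONIC = 'http://ironic.example.com:6385'  # placeholder endpoint

    resp = requests.get(IRONIC + '/v1/nodes',
                        headers={'X-OpenStack-Ironic-API-Version': '1.9',
                                 'X-Auth-Token': 'placeholder-token'})

    minimum = resp.headers.get('X-OpenStack-Ironic-API-Minimum-Version')
    maximum = resp.headers.get('X-OpenStack-Ironic-API-Maximum-Version')
    if resp.status_code == 406:
        # Requested version unsupported: retry with something in range.
        print('server supports %s through %s' % (minimum, maximum))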
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Tim Hinrichs
Inline.

On Thu, Jun 4, 2015 at 6:40 AM, Sean Dague s...@dague.net wrote:

 On 06/04/2015 08:52 AM, Adam Young wrote:
  On 06/04/2015 06:32 AM, Sean Dague wrote:
  On 06/03/2015 08:40 PM, Tim Hinrichs wrote:
  As long as there's some way to get the *declarative* policy from the
  system (as a data file or as an API call) that sounds fine.  But I'm
  dubious that it will be easy to keep the API call that returns the
  declarative policy in sync with the actual code that implements that
  policy.
  Um... why? Nova (or any other server project) needs to know what the
  currently computed policy is to actually enforce it internally. Turning
  around and spitting that back out on the wire is pretty straight
 forward.
 
  Is there some secret dragon I'm missing here?
 
  No.  But it is a significant bit of coding to do;  you would need to
  crawl every API and make sure you hit every code path that could enforce
  policy.

 Um, I don't understand that.

 I'm saying that you'd GET https://my.nova.api.server/policy;

 And it would return basically policy.json. There is no crawling every
 bit, this is a standard entry point to return a policy representation.
 Getting all services to implement this would mean that Keystone could
 support interesting policy things with arbitrary projects, not just a
 small curated list, which is going to be really important in a big tent
 world. Monasca and  Murano are just as important to support here as Nova
 and Swift.



Definitely agree it'd be great to have an API call that returns policy.
The question that I think Adam and I are trying to answer is how do
projects implement that call?  We've (perhaps implicitly) suggested 3
different options.

1. Have a data file called say 'default_policy.json' that the oslo-policy
engine knows how to use (and override with policy.json or whatever).  The
policy-API call that returns policy then just reads in this file and
returns it.

2. Hard-code the return value of the Python function that implements the
policy-API call.  Different options as to how to do this.

3. Write code that automatically generates the policy-API result by
analyzing the code that implements the rest of the API calls (like
create_vm, delete_vm) and extracting the policy that they implement.  This
would require hitting all code paths that implement policy, etc.

I'm guessing you had option (2) in mind.  Is that right?  Assuming that's
the case I see two possibilities.

a. The policy-API call is used internally by Nova to check that an API call
is permitted before executing it.  (I'm talking conceptually.  Obviously
you'd not go through http.)

b. The policy-API call is never used internally; rather, each of the other
API calls (like create-server, delete-server) just use arbitrary Python
logic to decide whether an API call is permitted or not.  This requires the
policy-API call implementation to be kept in sync manually with the other
API calls to ensure the policy-API call returns the actual policy.

I'd be happy with (a) and doubt the practicality of (b).
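
A minimal sketch of what option (1) combined with (a) could look like,
assuming a 'default_policy.json' shipped in-tree and the usual policy.json
override (the file names and the bare handler are assumptions, not an
existing API):

    import json


    def load_effective_policy(default_path='default_policy.json',
                              override_path='policy.json'):
        with open(default_path) as f:
            rules = json.load(f)
        try:
            with open(override_path) as f:
                rules.update(json.load(f))  # deployer overrides win
        except IOError:
            pass                            # no override file deployed
        return rules


    def get_policy_handler():
        # Returning the same dict the enforcer was initialized from means the
        # API answer cannot drift from what is actually being enforced.
        return json.dumps(load_effective_policy())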

Tim




 However, I've contemplated doing something like that with
  oslo.policy already;  run a workload through a server with policy
  non-enforcing (Permissive mode) and log the output to a file, then use
  that output to modify either the policy or the delegations (role
  assignments or trusts) used in a workflow.
 
  The Hard coded defaults worry me, though.  Nova is one piece (a big one,
  admittedly) of a delicate dance across multiple (not-so-micro) services
  that make up OpenStack.  Other serivces are going to take their cue from
  what Nova does, and that would make the overall flow that much harder to
  maintain.

 I don't understand why having hard coded defaults makes things harder,
 as long as they are discoverable. Defaults typically make things easier,
 because people then only change what they need, instead of setting a
 value for everything, having the deployment code update, and making
 their policy miss an important thing, or make something wrong because
 they didn't update it correctly at the same time as code.

  I think we need to break some very ingrained patterns in out policy
  enforcement.  I would worry that enforcing policy in code would give us
  something that we could not work around.  Instead, I think we need to
  ensure that the  Nova team leads the rest of the OpenStack core services
  in setting up best practices, and that is primarily a communication
  issue.  Getting to a common understanding of RBAC, and making it clear
  how roles are modified on a per-api basis will make Nova more robust.

 So I feel like I understand the high level dynamic policy end game. I
 feel like what I'm proposing for policy engine with encoded defaults
 doesn't negatively impact that. I feel there is a middle chunk where
 perhaps we've got different concerns or different dragons that we see,
 and are mostly talking past each other. And I don't know how to bridge
that. All the keystone specs I've dived into definitely assume a level
of understanding of keystone internals and culture that aren't obvious
from the outside.

Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Vahid S Hashemian
Hi Serg,

Sorry, I seem to be having issues sending messages to the mailing list.

Thanks for your message. I can work on the blueprint spec. Just trying to get a 
good picture of related Murano processes and where the connection points to 
Heat-Translator should be.

And I agree with your comment on MuranoPL. I think for TOSCA support and 
integration with Heat-Translator we need to consider HOT based packages.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs


-----Serg Melikyan smelik...@mirantis.com wrote: -----
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
From: Serg Melikyan smelik...@mirantis.com
Date: 06/04/2015 06:31AM
Subject: Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

Hi Vahid,

Your analysis is correct, and integration of heat-translator is as simple as 
you described in your document. It would be really awesome if you would turn 
this PDF into a proper specification for the blueprint.

P.S. Regarding several stacks for applications - currently HOT-based packages 
create a stack per application, and we don't support the same level of 
composition as we have in murano-pl based packages. This is another question 
for improvement.

On Wed, Jun 3, 2015 at 12:44 AM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:
 Hi Vahid,

 Thank you for sharing your thoughts. I have a question about application
 life-cycle if we use the TOSCA translator. In Murano the main advantage of
 using the HOT format is that we can update a Heat stack with resources as
 soon as we need to deploy an additional application. We can dynamically
 create multi-tier applications using other apps as building blocks. Imagine
 a Java app on top of Tomcat (VM1) and PostgreDB (VM2). All three components
 are three different apps in the catalog. Murano allows you to bring them and
 deploy them together.

 Do you think it will be possible to use the TOSCA translator for Heat stack
 updates? What will we do if we have two apps with two TOSCA templates, like
 Tomcat and Postgre? How can we combine them together?

 Thanks
 Gosha

 On Tue, Jun 2, 2015 at 12:14 PM, Vahid S Hashemian
 vahidhashem...@us.ibm.com wrote:
 This is what I have so far. Would love to hear feedback on it. Thanks.

 Regards,
 -
 Vahid Hashemian, Ph.D.
 Advisory Software Engineer, IBM Cloud Labs

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-06-04 Thread Jaume Devesa
Congratulations Assaf!!

On 4 June 2015 at 17:45, Paul Michali p...@michali.net wrote:

 +100 Great addition! Congratulations Assaf!

 On Thu, Jun 4, 2015 at 11:41 AM Miguel Lavalle mig...@mlavalle.com
 wrote:

 Congrats! Well deserved

 On Thu, Jun 4, 2015 at 8:50 AM, Assaf Muller amul...@redhat.com wrote:

 Thank you.

 We have a lot of work ahead of us :)


 - Original Message -
  It's a been a week since I proposed this, with no objections. Welcome
 to the
  Neutron core reviewer team as the new QA Lieutenant Assaf!
 
  On Tue, Jun 2, 2015 at 12:35 PM, Maru Newby  ma...@redhat.com 
 wrote:
 
 
  +1 from me, long overdue!
 
 
   On May 28, 2015, at 9:42 AM, Kyle Mestery  mest...@mestery.com 
 wrote:
  
   Folks, I'd like to propose Assaf Muller to be a member of the
 Neutron core
   reviewer team. Assaf has been a long time contributor in Neutron,
 and he's
   also recently become my testing Lieutenant. His influence and
 knowledge in
   testing will be critical to the team in Liberty and beyond. In
 addition to
   that, he's done some fabulous work for Neutron around L3 HA and DVR.
 Assaf
   has become a trusted member of our community. His review stats place
 him
   in the pack with the rest of the Neutron core reviewers.
  
   I'd also like to take this time to remind everyone that reviewing
 code is a
   responsibility, in Neutron the same as other projects. And core
 reviewers
   are especially beholden to this responsibility. I'd also like to
 point out
   that +1/-1 reviews are very useful, and I encourage everyone to
 continue
   reviewing code even if you are not a core reviewer.
  
   Existing Neutron cores, please vote +1/-1 for the addition of Assaf
 to the
   core reviewer team.
  
   Thanks!
   Kyle
  
   [1] http://stackalytics.com/report/contribution/neutron-group/180
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Jaume Devesa
Software Engineer at Midokura
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Vahid S Hashemian
Hi Serg,

Sorry, I seem to be having issues sending messages to the mailing list.

Thanks for your message. I can work on the blueprint spec. Just trying to get a 
good picture of related Murano processes and where the connection points to 
Heat-Translator should be.
And I agreed with your comment on MuranoPL. I think for TOSCA support and 
integration with Heat-Translator we need to consider HOT based packages.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Dmitry Tantsur

On 06/04/2015 05:43 PM, Sean Dague wrote:

On 06/04/2015 11:27 AM, Dmitry Tantsur wrote:

On 06/04/2015 05:03 PM, Sean Dague wrote:

On 06/04/2015 10:50 AM, Dmitry Tantsur wrote:

On 06/04/2015 04:40 PM, Ruby Loo wrote:

Hi,

In Kilo, we introduced microversions but it seems to be a
work-in-progress. There is an effort now to add microversion into the
API-WG's guidelines, to provide a consistent way of using microversions
across OpenStack projects [1]. Specifically, in the context of this
email, there is a proposed guideline for when to bump the microversion
[2].


As I understand this guideline tells to bump microversion on every
change which I strongly -1 as usual. Reason: it's bump for the sake of
bump, without any direct benefit for users (no, API discoverability is
not one, because microversion do not solve it).

I'll post the same comment to the guideline.


Backwards compatible API adds with no user signaling is a fallacy
because it assumes the arrow of time flows only one way.

If at version 1.5 you have a resource that is

foo {
bar: ...
}

And then you decide you want to add another attribute

foo {
bar: ...
baz: ...
}

And you don't bump the version, you'll get a set of users that use a
cloud with baz, and incorrectly assume that version 1.5 of the API means
that baz will always be there. Except, there are lots of clouds out
there, including ones that might be at the code commit before it was
added. Because there are lots of deploys in the world, your users can
effectively go back in time.

So now your API definition for version 1.5 is:

foo, may or may not contain baz, and there is no way of you knowing if
it will until you try. good luck.

Which is pretty awful.


Which is not very different from your definition. Version 1.5 contains
feature xyz, unless it's disabled by the configuration or patched out
downstream. Well, 1.4 can also contain the feature, if downstream
backported it. So good luck again.


The whole point of interop is you can't call it OpenStack if you are
patching it downstream to break the upstream contract. Microversions are
a contract.

Downstream can hack code all they want, it's no longer OpenStack when
they do. If they are ok with it, that's fine. But them taking OpenStack
code and making something that's not OpenStack is beyond the scope of
the problem here. This is about good actors, acting in good faith, to
provide a consistent experience to application writers.


I disagree with everything said above, but putting aside the discussion about 
the ideal vs. the real world, my point actually boils down to:
if you want feature discovery (or, as you call it, a contract), make it 
explicit. Create an API for it. Here you're upset with users guessing 
features - and you invent one more way to guess them. It probably works 
in an ideal world, but it's still a guessing game. And pretty inconvenient to 
use (to test, to document), I would say.


And within the Ironic community (including people deploying from master), 
I'm still waiting to hear any requests for such a feature at all, but that's 
another question.




-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Devananda van der Veen
On Jun 4, 2015 8:57 AM, Monty Taylor mord...@inaugust.com wrote:

 On 06/04/2015 11:27 AM, Dmitry Tantsur wrote:
  On 06/04/2015 05:03 PM, Sean Dague wrote:
  On 06/04/2015 10:50 AM, Dmitry Tantsur wrote:
  On 06/04/2015 04:40 PM, Ruby Loo wrote:
  Hi,
 
  In Kilo, we introduced microversions but it seems to be a
  work-in-progress. There is an effort now to add microversion into the
  API-WG's guidelines, to provide a consistent way of using
microversions
  across OpenStack projects [1]. Specifically, in the context of this
  email, there is a proposed guideline for when to bump the
microversion
  [2].
 
  As I understand this guideline tells to bump microversion on every
  change which I strongly -1 as usual. Reason: it's bump for the sake of
  bump, without any direct benefit for users (no, API discoverability is
  not one, because microversion do not solve it).
 
  I'll post the same comment to the guideline.
 
  Backwards compatible API adds with no user signaling is a fallacy
  because it assumes the arrow of time flows only one way.
 
  If at version 1.5 you have a resource that is
 
  foo {
 bar: ...
  }
 
  And then you decide you want to add another attribute
 
  foo {
 bar: ...
 baz: ...
  }
 
  And you don't bump the version, you'll get a set of users that use a
  cloud with baz, and incorrectly assume that version 1.5 of the API
means
  that baz will always be there. Except, there are lots of clouds out
  there, including ones that might be at the code commit before it was
  added. Because there are lots of deploys in the world, your users can
  effectively go back in time.
 
  So now your API definition for version 1.5 is:
 
  foo, may or may not contain baz, and there is no way of you knowing if
  it will until you try. good luck.
 
  Which is pretty awful.
 
  Which is not very different from your definition. Version 1.5 contains
  feature xyz, unless it's disabled by the configuration or patched out
  downstream. Well, 1.4 can also contain the feature, if downstream
  backported it. So good luck again.
 
  If you allow to group features under one microversion, that becomes even
  worse - you can have deployment that got microversion only partially.
 
  For example, that's what I would call API discoverability:
 
   $ ironic has-capability foobar
   true
 
  and that's how it would play with versioning:
 
   $ ironic --ironic-api-version 1.2 has-capability foobar
   false
   $ ironic --ironic-api-version 1.6 has-capability foobar
   true
 
  On the contrary, the only thing that microversion tells me is that the
  server installation is based on a particular upstream commit.
 
  To me these are orthogonal problems, and I believe they should be solved
  differently. Our disagreement is due to seeing them as one problem.

 We should stop doing this everywhere in OpenStack. It is the absolute
 worst experience ever.

 Stop allowing people to disable features with config. there is literally
 no user on the face of the planet for whom this is a positive thing.

 1.5 should mean that your server has Set(A) of features. 1.6 should mean
 Set(A+B) - etc. There should be NO VARIATION and any variation on that
 should basically mean that the cloud in question is undeniably broken.

 I understand that vendors and operators keep wanting to wank around with
 their own narcissistic arrogance to differentiate from one another.

 STOP IT

 Seriously, it causes me GIANT amount of pain and quite honestly if I
 wasn't tied to using OpenStack because I work on it, I would have given
 up on it a long time ago because of evil stuff like this.

 So, seriously - let's grow up and start telling people that they do not
 get to pick and choose user-visible feature sets. If they have an unholy
 obsession with a particular backend technology that does not allow a
 public feature of the API to work, then they are deploying a broken
 cloud and they need to fix it.


So I just had dinner last night with a very large user of OpenStack (yes,
they exist)  whose single biggest request is that we stop differentiating
in the API. To them, any difference in the usability / behavior / API
between OpenStack deployment X and Y is a serious enough problem that it
will have two effects:
- vendor lock in
- they stop using OpenStack
And since avoiding single vendor lock in is important to them, well, really
it has only one result.

Tl;Dr; Monty is right. We MUST NOT vary the API or behaviour significantly
or non-discoverably between clouds. Or we simply won't have users.

 
  Looking at your comments in the WG repo you also seem to only be
  considering projects shipped at Major release versions (Kilo, Liberty).
  Which might be true of Red Hat's product policy, but it's not generally
  true that all clouds are at a release boundary. Continous Deployment of
  OpenStack has been a value from Day 1, and many public clouds are not
  using releases, but are using arbitrary points off of master.
 
  I don't know why you decided I 

Re: [openstack-dev] Why do we drop branches? (WAS: Re: Targeting icehouse-eol?)

2015-06-04 Thread Jeremy Stanley
On 2015-06-04 16:23:12 +0200 (+0200), Ihar Hrachyshka wrote:
 Why do we even drop stable branches? If anything, it introduces
 unneeded problems to those who have their scripts/cookbooks set to
 chase those branches. They would need to switch to eol tag. Why not
 just leaving them sitting there, marked read only?
 
 It becomes especially important now that we say that stable HEAD *is*
 a stable release.

It's doable, but we'll need ACL changes applied to every project
participating in this release model to reject new change submissions
and prevent anyone from approving them on every branch which reaches
its EOL date. These ACLs will also grow longer and longer over time
as we need to add new sections for each EOL branch.

Also, it seems to me like a feature if downstream consumers have
to take notice and explicitly adjust their tooling to intentionally
continue deploying a release for which we no longer provide support
and security updates.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Dmitry Tantsur
So we should have told the folks to stop developing the agent? (Not sure why
you all think I'm talking about us.) Maybe.

But anyway, everyone seems to be deliberately ignoring my points, so I'm done
with this discussion.
On June 4, 2015 at 18:17, Devananda van der Veen 
devananda@gmail.com wrote:


 On Jun 4, 2015 8:57 AM, Monty Taylor mord...@inaugust.com wrote:
 
  On 06/04/2015 11:27 AM, Dmitry Tantsur wrote:
   On 06/04/2015 05:03 PM, Sean Dague wrote:
   On 06/04/2015 10:50 AM, Dmitry Tantsur wrote:
   On 06/04/2015 04:40 PM, Ruby Loo wrote:
   Hi,
  
   In Kilo, we introduced microversions but it seems to be a
   work-in-progress. There is an effort now to add microversion into
 the
   API-WG's guidelines, to provide a consistent way of using
 microversions
   across OpenStack projects [1]. Specifically, in the context of this
   email, there is a proposed guideline for when to bump the
 microversion
   [2].
  
   As I understand this guideline tells to bump microversion on every
   change which I strongly -1 as usual. Reason: it's bump for the sake
 of
   bump, without any direct benefit for users (no, API discoverability
 is
   not one, because microversion do not solve it).
  
   I'll post the same comment to the guideline.
  
   Backwards compatible API adds with no user signaling is a fallacy
   because it assumes the arrow of time flows only one way.
  
   If at version 1.5 you have a resource that is
  
   foo {
  bar: ...
   }
  
   And then you decide you want to add another attribute
  
   foo {
  bar: ...
  baz: ...
   }
  
   And you don't bump the version, you'll get a set of users that use a
   cloud with baz, and incorrectly assume that version 1.5 of the API
 means
   that baz will always be there. Except, there are lots of clouds out
   there, including ones that might be at the code commit before it was
   added. Because there are lots of deploys in the world, your users can
   effectively go back in time.
  
   So now your API definition for version 1.5 is:
  
   foo, may or may not contain baz, and there is no way of you knowing
 if
   it will until you try. good luck.
  
    Which is pretty awful.
  
   Which is not very different from your definition. Version 1.5 contains
   feature xyz, unless it's disabled by the configuration or patched out
   downstream. Well, 1.4 can also contain the feature, if downstream
   backported it. So good luck again.
  
   If you allow to group features under one microversion, that becomes
 even
   worse - you can have deployment that got microversion only partially.
  
   For example, that's what I would call API discoverability:
  
$ ironic has-capability foobar
true
  
   and that's how it would play with versioning:
  
$ ironic --ironic-api-version 1.2 has-capability foobar
false
$ ironic --ironic-api-version 1.6 has-capability foobar
true
  
   On the contrary, the only thing that microversion tells me is that the
   server installation is based on a particular upstream commit.
  
   To me these are orthogonal problems, and I believe they should be
 solved
   differently. Our disagreement is due to seeing them as one problem.
 
  We should stop doing this everywhere in OpenStack. It is the absolute
  worst experience ever.
 
  Stop allowing people to disable features with config. there is literally
  no user on the face of the planet for whom this is a positive thing.
 
  1.5 should mean that your server has Set(A) of features. 1.6 should mean
  Set(A+B) - etc. There should be NO VARIATION and any variation on that
  should basically mean that the cloud in question is undeniably broken.
 
  I understand that vendors and operators keep wanting to wank around with
  their own narcissistic arrogance to differentiate from one another.
 
  STOP IT
 
  Seriously, it causes me GIANT amount of pain and quite honestly if I
  wasn't tied to using OpenStack because I work on it, I would have given
  up on it a long time ago because of evil stuff like this.
 
  So, seriously - let's grow up and start telling people that they do not
  get to pick and choose user-visible feature sets. If they have an unholy
  obsession with a particular backend technology that does not allow a
  public feature of the API to work, then they are deploying a broken
  cloud and they need to fix it.
 

 So I just had dinner last night with a very large user of OpenStack (yes,
 they exist)  whose single biggest request is that we stop differentiating
 in the API. To them, any difference in the usability / behavior / API
 between OpenStack deployment X and Y is a serious enough problem that it
 will have two effects:
 - vendor lock in
 - they stop using OpenStack
 And since avoiding single vendor lock in is important to them, well,
 really it has only one result.

 Tl;Dr; Monty is right. We MUST NOT vary the API or behaviour significantly
 or non-discoverably between clouds. Or we simply won't have users.

  
   Looking at your comments in the WG repo 

Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Sean Dague
On 06/04/2015 12:12 PM, Tim Hinrichs wrote:
 Inline.
 
 On Thu, Jun 4, 2015 at 6:40 AM, Sean Dague s...@dague.net
 mailto:s...@dague.net wrote:
 
 On 06/04/2015 08:52 AM, Adam Young wrote:
  On 06/04/2015 06:32 AM, Sean Dague wrote:
  On 06/03/2015 08:40 PM, Tim Hinrichs wrote:
  As long as there's some way to get the *declarative* policy from the
  system (as a data file or as an API call) that sounds fine.  But I'm
  dubious that it will be easy to keep the API call that returns the
  declarative policy in sync with the actual code that implements that
  policy.
  Um... why? Nova (or any other server project) needs to know what the
  currently computed policy is to actually enforce it internally. Turning
  around and spitting that back out on the wire is pretty straight 
 forward.
 
  Is there some secret dragon I'm missing here?
 
  No.  But it is a significant bit of coding to do;  you would need to
  crawl every API and make sure you hit every code path that could enforce
  policy.
 
 Um, I don't understand that.
 
 I'm saying that you'd GET https://my.nova.api.server/policy;
 
 And it would return basically policy.json. There is no crawling every
 bit, this is a standard entry point to return a policy representation.
 Getting all services to implement this would mean that Keystone could
 support interesting policy things with arbitrary projects, not just a
 small curated list, which is going to be really important in a big tent
 world. Monasca and  Murano are just as important to support here as Nova
 and Swift.
 
 
 
 Definitely agree it'd be great to have an API call that returns policy. 
 The question that I think Adam and I are trying to answer is how do
 projects implement that call?  We've (perhaps implicitly) suggested 3
 different options.
 
 1. Have a data file called say 'default_policy.json' that the
 oslo-policy engine knows how to use (and override with policy.json or
 whatever).  The policy-API call that returns policy then just reads in
 this file and returns it. 
 
 2. Hard-code the return value of the Python function that implements the
 policy-API call.  Different options as to how to do this. 
 
 3. Write code that automatically generates the policy-API result by
 analyzing the code that implements the rest of the API calls (like
 create_vm, delete_vm) and extracting the policy that they implement. 
 This would require hitting all code paths that implement policy, etc.
 
 I'm guessing you had option (2) in mind.  Is that right?  Assuming
 that's the case I see two possibilities.
 
 a. The policy-API call is used internally by Nova to check that an API
 call is permitted before executing it.  (I'm talking conceptually. 
 Obviously you'd not go through http.)
 
 b. The policy-API call is never used internally; rather, each of the
 other API calls (like create-server, delete-server) just use arbitrary
 Python logic to decide whether an API call is permitted or not.  This
 requires the policy-API call implementation to be kept in sync manually
 with the other API calls to ensure the policy-API call returns the
 actual policy.
 
 I'd be happy with (a) and doubt the practicality of (b).

Right, I'm thinking 2 (a). There is some engine internally (presumably
part of oslo.policy) where we can feed it sources in order (sources
could be code structures or files on disk, we already support multi file
with current oslo.policy and incubator code):

  Base + Patch1 + Patch2 + ...

  policy.add(Base)
  policy.add(Patch1)
  policy.add(Patch2)

You can then call:

  policy.enforce(context, rulename, ...) like you do today, it knows
what it's doing.

And you can also call:

  for_export = policy.export()

To dump the computed policy back out. Which is a thing that doesn't
exist today. The GET /policy would just build the same policy engine,
which computes the final rule set, and exports it.

The bulk of the complicated code would be in oslo.policy, so shared.
Different projects have different wsgi stacks, so will have a bit of
different handling code for the request, but the fact that all the
interesting payload is a policy.export() means the development overhead
should be pretty minimal.
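
A rough sketch of that add/enforce/export flow, just to make the shape of it
concrete (the class and method names here are made up for illustration; this
is not the actual oslo.policy API):

  import json

  class LayeredPolicy(object):
      # Hypothetical helper: merge ordered policy sources and dump the
      # computed result back out for a GET /policy endpoint.
      def __init__(self):
          self._rules = {}

      def add(self, source):
          # Later sources (Patch1, Patch2, ...) override earlier ones (Base).
          self._rules.update(source)

      def enforce(self, context, rule_name, target=None):
          # Real enforcement would evaluate the rule expression against the
          # context/target; this toy version only checks the rule exists.
          return rule_name in self._rules

      def export(self):
          # The computed policy, ready to be serialized by GET /policy.
          return json.dumps(self._rules, indent=2, sort_keys=True)

  policy = LayeredPolicy()
  policy.add({"compute:create": "role:member"})   # Base
  policy.add({"compute:create": "role:admin"})    # Patch1 overrides Base
  print(policy.export())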

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [global-requirements][pbr] tarball and git requirements no longer supported in requirements.txt

2015-06-04 Thread Kevin Benton
+1. I had set up a CI for a third-party plugin and the easiest thing to do
to make sure it was running tests with the latest copy of the corresponding
neutron branch was to put the git URL in requirements.txt.

We wanted to always test the latest code so we had early detection of
failures. What's the appropriate way to do that without using a git
reference?
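
Right now the closest thing I know of is the pattern Ihar mentions below:
pulling neutron from git in the test environment rather than in
requirements.txt. A sketch of that tox.ini approach (URL, branch and egg name
here are placeholders; see the neutron-vpnaas tox.ini linked below for the
real thing):

  [testenv]
  deps = -egit+https://git.openstack.org/openstack/neutron.git#egg=neutron
         -r{toxinidir}/requirements.txt
         -r{toxinidir}/test-requirements.txt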

On Thu, Jun 4, 2015 at 2:06 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 On 06/03/2015 11:08 PM, Robert Collins wrote:
  Hi, right now there is a little used (e.g. its not in any active
  project these days) previous feature of pbr/global-requirements:
  we supported things that setuptools does not: to whit, tarball and
  git requirements.
 
  Now, these things are supported by pip, so the implementation
  involved recursing into pip from our setup.py (setup.py - pbr -
  pip). What we exported into setuptools was only the metadata about
  the dependency name. This meant that we were re-entering pip,
  potentially many times - it was, to be blunt, awful.
 
  Fortunately we removed the recursive re-entry into pip in pbr 1.0.
  This didn't remove the ability to parse requirements.txt files
  that contain urls, but it does mean they are converted to the
  simple dependency name when doing 'pip install .' in a project tree
  (or pip install $projectname), and so they are effectively
  unversioned - no lower and no upper bound. This works poorly in the
  gate: please don't use tarball or git urls in requirements.txt (or
  setup.cfg for that matter).
 
  We can still choose to use something from git or a tarball in test
  jobs, *if* thats the right thing (which it rarely is: I'm just
  being clear that the technical capability still exists)... but it
  needs to be done outside of requirements.txt going forward. Its
  also something that we can support with the new constraints system
  if desired [which will operate globally once in place (it is an
  extension of global-requirements)].
 
  One question that this raises, and this is why I wrote the email:
  is there any need to support this at all:- can we say that we won't
  use tarball/vcs support at all and block it as a policy step in
  global requirements? AIUI both git and tarball support is
  problematic for CI jobs due to the increased flakiness of depending
  on network resources... so its actively harmful anyway.
 

 Lots of Neutron modules, like advanced services, or out-of-tree
 plugins, rely on neutron code being checked out from git [1]. I don't
 say it's the way to go forward, and there were plans to stop relying
 on latest git to avoid frequent breakages, but it's not yet implemented.

 [1]:
 http://git.openstack.org/cgit/openstack/neutron-vpnaas/tree/tox.ini#n10

 Ihar
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2

 iQEcBAEBCAAGBQJVcBUnAAoJEC5aWaUY1u57N2EH/jUFd0H9pQ7LApSAIlDTEl2v
 WR1EXnc9Vxf5nCWq/qmncj3OCpMDlgL/ZMrFu74LRTDbe38+16kh+Fb+FvBEPGA4
 ZkQC3gyg22Se/QcerTxdPil16hnT912Hr3E0cTuu/4ktyipPrVsO39N56Jbrb6WQ
 SRCrEohIg7C3c0NgFcvBGh+S4rNf8IKT1oLzKrRhSLzIE8lSeGa1GNnSXPAXk19/
 2KIEnqBz3Q5J6umTprB5DFdxMe93Pj6jZmGIMFaHXYgG/yTdKz3zzGM3hpuLyGUQ
 kKYEzFJZ4vf2c6NBg//GYTcAkGjkM2QmAnS+uoztU5vm4QRkLgGcDCz29eQ5ufA=
 =6bUu
 -END PGP SIGNATURE-

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] ENROLL state and changing node driver

2015-06-04 Thread Devananda van der Veen
On Jun 4, 2015 5:53 AM, Dmitry Tantsur dtant...@redhat.com wrote:

 Hi!

 While working on the enroll spec [1], I got a thinking: within the new
state machine, when should we allow to change a node driver?

 My initial idea was to only allow driver change in ENROLL. Which sounds
good to me, but then it will be impossible to change a driver after moving
forward: we don't plan on having a way back to ENROLL from MANAGEABLE.

 What do you folks think we should do:
 1. Leave driver field as it was before
 2. Allow changing driver in ENROLL, do not allow later
 3. Allow changing driver in ENROLL only, but create a way back from
MANAGEABLE to ENROLL (unmanage??)


What problem are you trying to solve? Because I don't see a problem with
the current behavior, and you're proposing breaking the API and requiring
users to follow a significantly more complex process should they need to
change what driver is in use for a node, and preventing ever doing that
while a workload is running...

-Deva

 Cheers,
 Dmitry

 [1] https://review.openstack.org/#/c/179151

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why do we drop branches? (WAS: Re: Targeting icehouse-eol?)

2015-06-04 Thread ZZelle
argh

BLOCK push on the other stable branches

[access refs/heads/stable/*]
  push =  block group Anonymous Users



On Thu, Jun 4, 2015 at 6:34 PM, ZZelle zze...@gmail.com wrote:

 We can do the opposite to avoid more and more ACLs:

 ALLOW push on some specific stable branches

 [access refs/heads/stable/kilo]
   push = allow group ***-stable-maint

 [access refs/heads/stable/juno]
   push = allow group ***-stable-maint


 BLOCK push on the other stable branches

 [access refs/heads/stable/juno]
   push =  block group Anonymous Users


 Cedric/ZZelle@IRC





 On Thu, Jun 4, 2015 at 6:15 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-06-04 16:23:12 +0200 (+0200), Ihar Hrachyshka wrote:
  Why do we even drop stable branches? If anything, it introduces
  unneeded problems to those who have their scripts/cookbooks set to
  chase those branches. They would need to switch to eol tag. Why not
  just leaving them sitting there, marked read only?
 
  It becomes especially important now that we say that stable HEAD *is*
  a stable release.

 It's doable, but we'll need ACL changes applied to every project
 participating in this release model to reject new change submissions
 and prevent anyone from approving them on every branch which reaches
 its EOL date. These ACLs will also grow longer and longer over time
 as we need to add new sections for each EOL branch.

 Also, it seems to me like a feature if downstream consumers have
 to take notice and explicitly adjust their tooling to intentionally
 continue deploying a release for which we no longer provide support
 and security updates.
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Ruby Loo
Hi,

In Kilo, we introduced microversions but it seems to be a work-in-progress.
There is an effort now to add microversion into the API-WG's guidelines, to
provide a consistent way of using microversions across OpenStack projects
[1]. Specifically, in the context of this email, there is a proposed
guideline for when to bump the microversion [2].

Last week, in an IRC discussion [3], Devananda suggested that we bump the
microversion in these situations: ' required for any
non-backwards-compatible change, and strongly encouraged for any
significant features ? (and yes, that's subjective) '.

What do people think of that? I think it is clear that we should do it for
any non-backwards-compatible change. The subjective part worries me a bit
-- who decides, the feature submitter or the cores or ?

Alternatively, if people aren't too worried or care that much, we can
decide to follow the guideline [3] (and limp along until that guideline is
finalized).

--ruby


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-June/065793.html
[2] https://review.openstack.org/#/c/187896/
[3] around 2015-05-26T16:29:05,
http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2015-05-26.log
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-04 Thread Salvatore Orlando
One reason for not sending the heartbeat from a separate greenthread could
be that the agent is already doing it [1].
The current proposed patch addresses the issue blindly - that is to say
before declaring an agent dead let's wait for some more time because it
could be stuck doing stuff. In that case I would probably make the
multiplier (currently 2x) configurable.

The reason the state report does not occur is probably that both it and the
resync procedure are periodic tasks. If I got it right, they're both executed
as eventlet greenthreads, but one at a time. Perhaps adding an initial delay
to the full sync task would ensure that the first thing an agent does when it
comes up is send a heartbeat to the server?

On the other hand, while doing the initial full resync, is the agent able
to process updates? If not, perhaps it makes sense to keep it marked down
until it finishes synchronisation.

Salvatore

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l3/agent.py#n587
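
A rough eventlet sketch of the two ideas above (heartbeat in its own
greenthread, plus an initial delay on the full sync) - this is not the actual
agent code, and the interval values and function names are placeholders:

  import eventlet
  eventlet.monkey_patch()

  def start_agent(report_state, full_resync,
                  report_interval=30, resync_initial_delay=5):
      # Heartbeat loop in its own greenthread so a long resync cannot
      # starve the state reports and get the agent marked dead.
      def _heartbeat_loop():
          while True:
              try:
                  report_state()
              except Exception:
                  pass  # never let a failed report kill the loop
              eventlet.sleep(report_interval)

      eventlet.spawn(_heartbeat_loop)
      # Delay the first full sync so the initial heartbeat goes out first.
      eventlet.spawn_after(resync_initial_delay, full_resync)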

On 4 June 2015 at 16:16, Kevin Benton blak...@gmail.com wrote:

 Why don't we put the agent heartbeat into a separate greenthread on the
 agent so it continues to send updates even when it's busy processing
 changes?
 On Jun 4, 2015 2:56 AM, Anna Kamyshnikova akamyshnik...@mirantis.com
 wrote:

 Hi, neutrons!

 Some time ago I discovered a bug for l3 agent rescheduling [1]. When
 there are a lot of resources and agent_down_time is not big enough
 neutron-server starts marking l3 agents as dead. The same issue has been
 discovered and fixed for DHCP-agents. I proposed a change similar to those
 that were done for DHCP-agents. [2]

 There is no unified opinion on this bug and proposed change, so I want to
 ask developers whether it worth to continue work on this patch or not.

 [1] - https://bugs.launchpad.net/neutron/+bug/1440761
 [2] - https://review.openstack.org/171592

 --
 Regards,
 Ann Kamyshnikova
 Mirantis, Inc

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][reseller] New way to get a project scoped token by name

2015-06-04 Thread Rodrigo Duarte
First I have some questions: if we are going to add a delimiter, how can we
handle the OpenStack API Stability guidelines [1]? If we add this delimiter in
keystone.conf (with the default value being . or /), do we fall
into the same API stability problems?

Personally, I'm in favor of having a way to represent the hierarchy, since
it will loosen the naming restrictions across project and domain entities
- resulting in a better UX. The only remaining restriction will be that we
won't be able to create an entity with the same name at the same level of
the hierarchy. In addition, besides the token request, using a delimiter
also solves the problem of representing a hierarchy in SAML Assertions
generated by a keystone identity provider as well.

Also, if we are going to use a delimiter, we need to update the way
project names are returned in the GET v3/projects API to include the
hierarchy so the user (or client) knows how to request a token using the
project name.
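
To make the discussion concrete, here is what a project-scoped token request
*might* look like if we flattened the hierarchy into the project name with a
"." delimiter. This is purely illustrative - the exact payload shape is what
this thread still needs to agree on, and the names are made up:

  import json

  token_request = {
      "auth": {
          "identity": {
              "methods": ["password"],
              "password": {
                  "user": {
                      "name": "alice",
                      "domain": {"name": "domainX"},
                      "password": "secret",
                  }
              },
          },
          "scope": {
              "project": {
                  "domain": {"name": "domainX"},
                  # parent project and subproject, joined by the
                  # proposed delimiter
                  "name": "projectA.projectB",
              }
          },
      }
  }

  print(json.dumps(token_request, indent=2))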

On Thu, Jun 4, 2015 at 4:33 AM, David Chadwick d.w.chadw...@kent.ac.uk
wrote:

 I agree that it is better to choose one global delimiter (ideally this
 should have been done from day one, when hierarchical naming should have
 been used as the basic name form for Openstack). Whatever you choose now
 will cause someone somewhere some pain, but perhaps the overall pain to
 the whole community will be less if you dictate what this delimiter is
 going to be now, but dont introduce for a year. This allows everyone a
 year to remove the delimiter from their names.

 regards

 David

 On 03/06/2015 22:05, Morgan Fainberg wrote:
  Hi David,
 
  There needs to be some form of global hierarchy delimiter - well more to
  the point there should be a common one across OpenStack installations to
  ensure we are providing a good and consistent (and more to the point
  inter-operable) experience to our users. I'm worried a custom defined
  delimiter (even at the domain level) is going to make it difficult to
  consume this data outside of the context of OpenStack (there are
  applications that are written to use the APIs directly).
 
  The alternative is to explicitly list the delimiter in the project (
  e.g. {hierarchy: {delim: ., domain.project.project2}} ). The
  additional need to look up the delimiter / set the delimiter when
  creating a domain is likely to make for a worse user experience than
  selecting one that is not different across installations.
 
  --Morgan
 
  On Wed, Jun 3, 2015 at 12:19 PM, David Chadwick d.w.chadw...@kent.ac.uk
  mailto:d.w.chadw...@kent.ac.uk wrote:
 
 
 
  On 03/06/2015 14:54, Henrique Truta wrote:
   Hi David,
  
   You mean creating some kind of delimiter attribute in the domain
   entity? That seems like a good idea, although it does not solve the
   problem Morgan's mentioned that is the global hierarchy delimiter.
 
  There would be no global hierarchy delimiter. Each domain would
 define
  its own and this would be carried in the JSON as a separate
 parameter so
  that the recipient can tell how to parse hierarchical names
 
  David
 
  
   Henrique
  
   Em qua, 3 de jun de 2015 às 04:21, David Chadwick
   d.w.chadw...@kent.ac.uk mailto:d.w.chadw...@kent.ac.uk
  mailto:d.w.chadw...@kent.ac.uk mailto:d.w.chadw...@kent.ac.uk
  escreveu:
  
  
  
   On 02/06/2015 23:34, Morgan Fainberg wrote:
Hi Henrique,
   
I don't think we need to specifically call out that we want a
   domain, we
should always reference the namespace as we do today.
  Basically, if we
ask for a project name we need to also provide it's
  namespace (your
option #1). This clearly lines up with how we handle
 projects in
   domains
today.
   
I would, however, focus on how to represent the namespace in
  a single
(usable) string. We've been delaying the work on this for a
  while
   since
we have historically not provided a clear way to delimit the
   hierarchy.
If we solve the issue with what is the delimiter between
  domain,
project, and subdomain/subproject, we end up solving the
  usability
  
   why not allow the top level domain/project to define the
  delimiter for
   its tree, and to carry the delimiter in the JSON as a new
  parameter.
   That provides full flexibility for all languages and locales
  
   David
  
issues with proposal #1, and not breaking the current
  behavior you'd
expect with implementing option #2 (which at face value
 feels to
   be API
incompatible/break of current behavior).
   
Cheers,
--Morgan
   
On Tue, Jun 2, 2015 at 7:43 AM, Henrique Truta

Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-04 Thread Ruby Loo
On 4 June 2015 at 02:58, Xu, Hejie hejie...@intel.com wrote:

  ...
 And another guideline for when we should bump Microversion
 *https://review.openstack.org/#/c/187896/*
 https://review.openstack.org/#/c/187896/



This is timely because just this very minute I was going to send out email
to the Ironic community about this -- when *should* we bump the
microversion. For fear of hijacking this thread, I'm going to start a new
thread to discuss with Ironic folks first.


 As I know, there is already a small difference between Nova's and Ironic's
  implementations. Ironic returns the min/max versions in HTTP headers when the
  requested version isn't supported by the server. There isn't such a thing
  in Nova, but that is something we need for version negotiation in Nova
  as well.
  Sean has pointed out that we should use the response body instead of HTTP
  headers, since the body can include an error message. I really hope the
  Ironic team can take a look in case you have a compelling reason for using
  HTTP headers.


I don't want to change the ironic code so let's go with http headers.
(That's a good enough reason, isn't it?)  :-)

By the way, did you see Ironic's spec on the/our desired behaviour between
Ironic's server and client [1]? It's ... add your own adjective here.

Thanks Alex!

--ruby

[1]
http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html
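
For anyone who hasn't played with it, a quick sketch of how the header-based
negotiation looks from a client's point of view (the endpoint URL is a
placeholder, and the header names should be double-checked against the spec
above if your deployment differs):

  import requests

  resp = requests.get(
      "http://ironic.example.com:6385/v1/nodes",
      headers={"X-OpenStack-Ironic-API-Version": "1.6"},
  )
  print(resp.status_code)
  # On an unsupported version the server advertises the range it does speak:
  print(resp.headers.get("X-OpenStack-Ironic-API-Minimum-Version"))
  print(resp.headers.get("X-OpenStack-Ironic-API-Maximum-Version"))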
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Dmitry Tantsur

On 06/04/2015 04:40 PM, Ruby Loo wrote:

Hi,

In Kilo, we introduced microversions but it seems to be a
work-in-progress. There is an effort now to add microversion into the
API-WG's guidelines, to provide a consistent way of using microversions
across OpenStack projects [1]. Specifically, in the context of this
email, there is a proposed guideline for when to bump the microversion [2].


As I understand it, this guideline says to bump the microversion on every 
change, which I strongly -1 as usual. Reason: it's a bump for the sake of 
bumping, without any direct benefit for users (no, API discoverability is 
not one, because microversions do not solve it).


I'll post the same comment to the guideline.



Last week, in an IRC discussion [3], Devananda suggested that we bump
the microversion in these situations: ' required for any
non-backwards-compatible change, and strongly encouraged for any
significant features ? (and yes, that's subjective) '.

What do people think of that? I think it is clear that we should do it
for any non-backwards-compatible change. The subjective part worries me
a bit -- who decides, the feature submitter or the cores or ?


My vote is: if we can prove that a sane user will be broken by the 
change, we bump the microversion.




Alternatively, if people aren't too worried or care that much, we can
decide to follow the guideline [3] (and limp along until that guideline
is finalized).

--ruby


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-June/065793.html
[2] https://review.openstack.org/#/c/187896/
[3] around 2015-05-26T16:29:05,
http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2015-05-26.log


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Time to decide something on the vendor tools repo

2015-06-04 Thread Ruby Loo
On 4 June 2015 at 09:29, Dmitry Tantsur dtant...@redhat.com wrote:

On the summit we were discussing things like chassis discovery, and arrived
 at rough conclusion that we want it to be somewhere in a separate repo.
 More precisely, we wanted some place for vendor to contribute code (aka
 scripts) that aren't good fit for both standard interfaces and existing
 vendor passthrough (chassis discovery again is a good example).

 Our summit notes are sparse on this topic [1], but I'll add in what I see
there.


 I suggest to decide something finally to unblock people. A few questions
 follow:

 Should we
 1. create one repo for all vendors (say, ironic-contrib-tools)


from summit, the advantage of having them in one place is that they will
get used and improved by other vendors

Presumably there'd be subdirectories, one for each vendor and/or based on
functionality.

The disadvantage maybe, is who will maintain/own (and merge stuff) in this
repo.


2. create a repo for every vendor appearing


from summit, Creating an ironic-utils-vendor repo on stackforge to host
each manufacturer set of tools and document the directo


 3. ask vendors to go for stackforge, at least until their solution shapes
 (like we did with inspector)?


I think whatever is decided, should be in stackforge.


 4. %(your_variant)s

 If we go down 1-2 route, should
 1. ironic-core team own the new repo(s)?
 2. or should we form a new team from interested people?
 (1 and 2 and not exclusive actually).

 I personally would go for #3 - stackforge. We already have e.g.
 stackforge/proliantutils as an example of something closely related to
 Ironic, but still independent.

 I'm also fine with #1#1 (one repo, owned by group of interested people).


I don't think any of these third-party tools should be owned by the ironic
team.

I think that interested parties should get together (or not) to decide
where they'd like to put their stuff. Ironic wiki pages (or somewhere) can
have a link pointing to these other repo(s).

--ruby

[1] https://etherpad.openstack.org/p/liberty-ironic-rack-to-ready-state
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Dmitry Tantsur

On 06/04/2015 05:03 PM, Sean Dague wrote:

On 06/04/2015 10:50 AM, Dmitry Tantsur wrote:

On 06/04/2015 04:40 PM, Ruby Loo wrote:

Hi,

In Kilo, we introduced microversions but it seems to be a
work-in-progress. There is an effort now to add microversion into the
API-WG's guidelines, to provide a consistent way of using microversions
across OpenStack projects [1]. Specifically, in the context of this
email, there is a proposed guideline for when to bump the microversion
[2].


As I understand this guideline tells to bump microversion on every
change which I strongly -1 as usual. Reason: it's bump for the sake of
bump, without any direct benefit for users (no, API discoverability is
not one, because microversion do not solve it).

I'll post the same comment to the guideline.


Backwards compatible API adds with no user signaling is a fallacy
because it assumes the arrow of time flows only one way.

If at version 1.5 you have a resource that is

foo {
   bar: ...
}

And then you decide you want to add another attribute

foo {
   bar: ...
   baz: ...
}

And you don't bump the version, you'll get a set of users that use a
cloud with baz, and incorrectly assume that version 1.5 of the API means
that baz will always be there. Except, there are lots of clouds out
there, including ones that might be at the code commit before it was
added. Because there are lots of deploys in the world, your users can
effectively go back in time.

So now your API definition for version 1.5 is:

foo, may or may not contain baz, and there is no way of you knowing if
it will until you try. good luck.

Which is pretty awful.


Which is not very different from your definition. Version 1.5 contains 
feature xyz, unless it's disabled by the configuration or patched out 
downstream. Well, 1.4 can also contain the feature, if downstream 
backported it. So good luck again.


If you allow to group features under one microversion, that becomes even 
worse - you can have deployment that got microversion only partially.


For example, that's what I would call API discoverability:

 $ ironic has-capability foobar
 true

and that's how it would play with versioning:

 $ ironic --ironic-api-version 1.2 has-capability foobar
 false
 $ ironic --ironic-api-version 1.6 has-capability foobar
 true

On the contrary, the only thing that microversion tells me is that the 
server installation is based on a particular upstream commit.


To me these are orthogonal problems, and I believe they should be solved 
differently. Our disagreement is due to seeing them as one problem.




Looking at your comments in the WG repo you also seem to only be
considering projects shipped at Major release versions (Kilo, Liberty).
Which might be true of Red Hat's product policy, but it's not generally
true that all clouds are at a release boundary. Continous Deployment of
OpenStack has been a value from Day 1, and many public clouds are not
using releases, but are using arbitrary points off of master.


I don't know why you decided I don't know it, but you're wrong.

 A

microversion describes when a changes happens so that applications
writers have a very firm contract about what they are talking to.


No, they don't. Too many things can modify behavior - see above.



-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Lucas Alvares Gomes
Hi Ruby,

Thanks for starting this thread; just like you, I've always been
confused about when and when not to bump the microversion of the API.

 Backwards compatible API adds with no user signaling is a fallacy
 because it assumes the arrow of time flows only one way.

 If at version 1.5 you have a resource that is

 foo {
   bar: ...
 }

 And then you decide you want to add another attribute

 foo {
   bar: ...
   baz: ...
 }

 And you don't bump the version, you'll get a set of users that use a
 cloud with baz, and incorrectly assume that version 1.5 of the API means
 that baz will always be there. Except, there are lots of clouds out
 there, including ones that might be at the code commit before it was
 added. Because there are lots of deploys in the world, your users can
 effectively go back in time.

 So now your API definition for version 1.5 is:

 foo, may or may not contain baz, and there is no way of you knowing if
 it will until you try. good luck.

 Which is pretty awful.


Oh, that's a good point, I can see the value on that.

Perhaps the guide should define bumping the microversion along these
lines: Whenever a change is made to the API which is visible to the
client, the microversion should be incremented?

This is powerful because it gives clients a fine-grained way to
detect which API features are available.

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >