[openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-04 Thread Xu, Hejie
Hi, guys,

I'm working on adding Microversions to the API-WG's guidelines, to make sure 
we have consistent Microversion behavior in the APIs for users.
Nova and Ironic already have Microversion implementations, and as far as I know 
Magnum (https://review.openstack.org/#/c/184975/) is going to implement 
Microversions as well.

I hope all the projects which support (or plan to support) Microversions can 
join the review of the guideline.

The Microversion specification (mostly copied from nova-specs): 
https://review.openstack.org/#/c/187112
And another guideline for when we should bump the Microversion: 
https://review.openstack.org/#/c/187896/

As far as I know, there is already a small difference between Nova's and 
Ironic's implementations. Ironic returns the min/max supported versions in 
HTTP headers when the requested version isn't supported by the server; there 
is no such thing in Nova, but that kind of version negotiation is something we 
need for Nova also. Sean has pointed out that we should use the response body 
instead of HTTP headers, since the body can include an error message. I really 
hope the Ironic team can take a look and say whether you have a compelling 
reason for using HTTP headers.

And if we decide to return a body instead of HTTP headers, we probably need to 
think about backward compatibility also, because the Microversion mechanism 
itself isn't versioned. So I think we should keep those headers for a while; 
does that make sense?
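Sean's suggestion (error details in the response body) and the backward-compatibility concern (keep Ironic's headers for a while) can coexist. A minimal sketch, where the header and body field names are illustrative assumptions, not any project's actual API:

```python
import json

# Illustrative only: return the supported version range in a JSON error
# body (per Sean's suggestion) while keeping Ironic-style headers during
# the transition, so clients that only read headers keep working.
MIN_VERSION = "1.1"
MAX_VERSION = "1.9"

def reject_unsupported_version(requested):
    headers = {
        # Legacy header form, kept for backward compatibility.
        "X-OpenStack-API-Minimum-Version": MIN_VERSION,
        "X-OpenStack-API-Maximum-Version": MAX_VERSION,
    }
    body = json.dumps({"errors": [{
        "status": 406,
        "title": "Requested microversion is not supported",
        "detail": "Version %s is outside the supported range %s - %s"
                  % (requested, MIN_VERSION, MAX_VERSION),
        "min_version": MIN_VERSION,
        "max_version": MAX_VERSION,
    }]})
    return 406, headers, body

status, headers, body = reject_unsupported_version("2.7")
```

The body duplicates the min/max values so that, once clients have migrated, the headers can be dropped without losing information.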

I hope we end up with a good guideline for Microversions, because we can only 
change the Microversion mechanism itself in a backward-compatible way.

Thanks
Alex Xu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] [oslo] [cross-project] Dynamic Policy

2015-06-04 Thread David Chadwick
I don't see why this is not possible. The DB has an API which allows each
service to retrieve its own policy, so with an appropriate helper app
any service should be able to read a serialised policy from the DB
as if it were reading a file from the local file store.

regards

David
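The helper-app idea above could be sketched like this, with `fetch_policy()` standing in for the real "retrieve my policy from the DB" API call (all names here are illustrative assumptions):

```python
import json
import os
import tempfile

# Illustrative sketch: fetch_policy() stands in for the real policy-DB
# API call; the helper writes the result to a local file so the
# service's existing file-based policy loading keeps working unchanged.
def fetch_policy(service_name):
    # Stand-in data; a real helper would query the policy database.
    return {"compute:get": "rule:admin_or_owner"}

def materialize_policy(service_name, target_dir):
    path = os.path.join(target_dir, "policy.json")
    with open(path, "w") as f:
        json.dump(fetch_policy(service_name), f)
    return path

tmp = tempfile.mkdtemp()
path = materialize_policy("nova", tmp)
with open(path) as f:
    loaded = json.load(f)  # the service reads it as if a local file
```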

On 04/06/2015 01:46, Hu, David J (Converged Cloud) wrote:
> I am not a big fan of putting admins through a multi-step process.  It looks 
> like admins will need to learn the unified policy file first, then 1 or 2 or more 
> releases later, learn about policy in the db.  I understand we are doing 
> things incrementally.  I would prefer that we come up with something or some 
> process that avoids the hassle of dealing with the unified policy file for admins. 
>  In other words, admins go straight from the policy file as it is today to policy in 
> the db.
> 
> 
> David
> 
> 
> -Original Message-
> From: Adam Young [mailto:ayo...@redhat.com] 
> Sent: Wednesday, June 3, 2015 4:39 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [keystone] [nova] [oslo] [cross-project] Dynamic 
> Policy
> 
> On 06/03/2015 02:55 PM, Sean Dague wrote:
>> On 06/03/2015 02:44 PM, David Chadwick wrote:
>>> In the design that we have been building for a policy administration 
>>> database, we dont require a single policy in order to unify common 
>>> concepts such as hierarchical attributes and roles between the 
>>> different policies of Openstack services. This is because policies 
>>> and hierarchies are held separately and are linked via a many to many 
>>> relationship. My understanding of Adam's primary requirement was that 
>>> a role hierarchy say, should be common across all OpenStack service 
>>> policies, without this necessarily meaning you have to have one huge 
>>> policy. And there is no requirement for Keystone to own all the 
>>> policies. So each service could still own and manage its own policy, 
>>> whilst having attribute hierarchies in common.
>>>
>>> Does this help?
>>>
>>> regards
>>>
>>> David
>> That part makes total sense. What concerned me is there was an 
>> intermediary step that seemed like it was literally *one file* 
>> (https://review.openstack.org/134656). That particular step I think is 
>> unworkable.
> 
> How is this for an approach:
> 
> 1.  Unified policy  file that is just the union of what is in the current 
> projects.  Each project will have a clearly marked section.
> 
> 2.  Split up the main file into sections, one per each project, and put those 
> in separate files.  Build system will concatenate them into a single file.
> 
> 3.  Allow each of the projects to replace their section of the file with a 
> file containing just a URL to the upstream git repo that contains their 
> project-specific section.  When building the overall unified policy file, 
> those projects that have their own section will get it merged in from their 
> own repos.
> 
> 4.  Eventually, the unified policy file will be expected to be built out of 
> each of the projects git repos.
> 
> I agree with you that we want the projects to manage their own sections; I 
> just think we need a scrub step where we all look at the individual sections 
> together with a critical eye first.
> 
>>
>> By "common role hierachy" do you mean namespaced roles for services?
>> Because if yes, definitely. And I think that's probably the first
>> concrete step moving the whole thing forward, which should be doable on
>> the existing static json definitions.
>>
>>  -Sean
>>
> 
> 

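Step 2 of Adam's plan (concatenating per-project policy sections into a single unified file) could be sketched like this; the section contents and the collision check are illustrative assumptions, not any project's actual build code:

```python
# Each project owns its own policy section; a build step merges them
# into one unified policy dict, refusing rule-name collisions so a
# scrub step can catch conflicts before publishing the unified file.
sections = {
    "nova": {"compute:get": "rule:admin_or_owner"},
    "keystone": {"identity:get_user": "rule:admin_required"},
}

def build_unified_policy(sections):
    unified = {}
    for project, rules in sorted(sections.items()):
        for name, rule in rules.items():
            if name in unified:
                raise ValueError("duplicate rule: %s" % name)
            unified[name] = rule
    return unified

unified = build_unified_policy(sections)
```

In practice the sections would come from per-project files or git checkouts (steps 3 and 4 of the plan), but the merge logic stays the same.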


Re: [openstack-dev] [keystone][reseller] New way to get a project scoped token by name

2015-06-04 Thread David Chadwick
I agree that it is better to choose one global delimiter (ideally this
should have been done from day one, when hierarchical naming should have
been used as the basic name form for OpenStack). Whatever you choose now
will cause someone somewhere some pain, but perhaps the overall pain to
the whole community will be less if you dictate what this delimiter is
going to be now, but don't introduce it for a year. This allows everyone a
year to remove the delimiter from their names.

regards

David
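Carrying the delimiter alongside the hierarchical name, as discussed in this thread, might look like the following; the JSON shape and key names are illustrative assumptions, not the actual Keystone API:

```python
# The delimiter travels with the hierarchical name, so a recipient can
# parse the name no matter which character a given installation (or
# domain) chose. Key names here are illustrative assumptions.
def parse_hierarchy(payload):
    delim = payload["hierarchy"]["delimiter"]
    return payload["hierarchy"]["name"].split(delim)

dotted = {"hierarchy": {"delimiter": ".", "name": "domainA.projB.projC"}}
slashed = {"hierarchy": {"delimiter": "/", "name": "domainA/projB"}}

assert parse_hierarchy(dotted) == ["domainA", "projB", "projC"]
assert parse_hierarchy(slashed) == ["domainA", "projB"]
```

This illustrates Morgan's concern too: every consumer must perform the extra delimiter lookup, which a single global delimiter would avoid.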

On 03/06/2015 22:05, Morgan Fainberg wrote:
> Hi David,
> 
> There needs to be some form of global hierarchy delimiter - well more to
> the point there should be a common one across OpenStack installations to
> ensure we are providing a good and consistent (and more to the point
> inter-operable) experience to our users. I'm worried a custom defined
> delimiter (even at the domain level) is going to make it difficult to
> consume this data outside of the context of OpenStack (there are
> applications that are written to use the APIs directly).
> 
> The alternative is to explicitly list the delimiter in the project (
> e.g. {"hierarchy": {"delim": ".", "domain.project.project2"}} ). The
> additional need to look up the delimiter / set the delimiter when
> creating a domain is likely to make for a worse user experience than
> selecting one that is not different across installations.
> 
> --Morgan
> 
> On Wed, Jun 3, 2015 at 12:19 PM, David Chadwick wrote:
> 
> 
> 
> On 03/06/2015 14:54, Henrique Truta wrote:
> > Hi David,
> >
> > You mean creating some kind of "delimiter" attribute in the domain
> > entity? That seems like a good idea, although it does not solve the
> > problem Morgan's mentioned that is the global hierarchy delimiter.
> 
> There would be no global hierarchy delimiter. Each domain would define
> its own and this would be carried in the JSON as a separate parameter so
> that the recipient can tell how to parse hierarchical names.
> 
> David
> 
> >
> > Henrique
> >
> > On Wed, Jun 3, 2015 at 04:21, David Chadwick wrote:
> >
> >
> >
> > On 02/06/2015 23:34, Morgan Fainberg wrote:
> > > Hi Henrique,
> > >
> > > I don't think we need to specifically call out that we want a
> > domain, we
> > > should always reference the namespace as we do today.
> Basically, if we
> > ask for a project name we need to also provide its
> namespace (your
> > > option #1). This clearly lines up with how we handle projects in
> > domains
> > > today.
> > >
> > > I would, however, focus on how to represent the namespace in
> a single
> > > (usable) string. We've been delaying the work on this for a
> while
> > since
> > > we have historically not provided a clear way to delimit the
> > hierarchy.
> > > If we solve the issue with "what is the delimiter" between
> domain,
> > > project, and subdomain/subproject, we end up solving the
> usability
> >
> > why not allow the top level domain/project to define the
> delimiter for
> > its tree, and to carry the delimiter in the JSON as a new
> parameter.
> > That provides full flexibility for all languages and locales
> >
> > David
> >
> > > issues with proposal #1, and not breaking the current
> behavior you'd
> > > expect with implementing option #2 (which at face value feels to
> > be API
> > > incompatible/break of current behavior).
> > >
> > > Cheers,
> > > --Morgan
> > >
> > > On Tue, Jun 2, 2015 at 7:43 AM, Henrique Truta wrote:
> > > Hi folks,
> > >
> > >
> > > In Reseller[1], we’ll have the domains concept merged into
> > projects,
> > > that means that we will have projects that will behave
> as domains.
> > > Therefore, it will be possible to have two projects with
> the same
> > > name in a hierarchy, one being a domain and another being a
> > regular
> > > project. For instance, the following hierarchy will be
> valid:
> > >
> > > A - is_domain project, with domain A
> > >
> > > |
> > >
> > > B - project
> > >
> > > |

Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-04 Thread Morgan Fainberg
For Fernet, the groups would only be populated on validate as Dolph outlined. 
They would not be added to the core payload. We do not want to expand the 
payload in this manner. 

--Morgan

Sent via mobile

> On Jun 3, 2015, at 21:51, Lance Bragstad  wrote:
> 
> I feel that if we allowed group IDs to be an attribute of the Fernet core 
> payload, we would continue to open up the possibility for tokens to be greater 
> than the initial "acceptable" size limit for a Fernet token (which I believe was 
> 255 bytes?). With this, I think we need to provide guidance on the number of 
> group IDs allowed within the token before that size limit is compromised.
> 
> We've landed patches recently that allow for id strings to be included in the 
> Fernet payload [0], regardless of being uuid format (which can be converted 
> to bytes before packing to save space, this is harder for us to do with 
> non-uuid format id strings). This can also cause the Fernet token size to 
> grow. If we plan to include more information in the Fernet token payload I 
> think we should determine if the original acceptable size limit still applies 
> and regardless of what that size limit is provide some sort of "best 
> practices" for helping deployments keep their token size as small as possible.
> 
> 
> Keeping the tokens user (and developer) friendly was a big plus in the design 
> of Fernet, and providing resource for deployments to maintain that would be 
> helpful.
> 
> 
> [0] 
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:bug/1459382,n,z
> 
>> On Wed, Jun 3, 2015 at 10:19 PM, Steve Martinelli wrote:
>> Dozens to hundreds of roles or endpoints could cause an issue now :) 
>> 
>> But yeah, groups are much more likely to number in the dozens than roles or 
>> endpoints. But I think the Fernet token size is so small that it could 
>> probably handle this (since it does so now for the federated workflow). 
>> 
>> Thanks,
>> 
>> Steve Martinelli
>> OpenStack Keystone Core 
>> 
>> 
>> 
>> From: "Fox, Kevin M"
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> Date: 06/03/2015 11:14 PM
>> Subject: Re: [openstack-dev] [keystone][barbican] Regarding
>> exposing X-Group-xxxx in token validation
>> 
>> 
>> 
>> Will dozens to a hundred groups or so on one user cause issues? :)
>> 
>> Thanks,
>> Kevin 
>>   
>> From: Morgan Fainberg
>> Sent: Wednesday, June 03, 2015 7:23:22 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
>> X-Group-xxxx in token validation
>> 
>> In general I am of the opinion with the move to Fernet there is no good 
>> reason we should avoid adding the group information into the token. 
>> 
>> --Morgan
>> 
>> Sent via mobile 
>> 
>> On Jun 3, 2015, at 18:44, Dolph Mathews  wrote:
>> 
>> 
>> On Wed, Jun 3, 2015 at 5:58 PM, John Wood  wrote: 
>> Hello folks, 
>> 
>> There has been discussion about adding user group support to the per-secret 
>> access control list (ACL) feature in Barbican. Hence secrets could be marked 
>> as accessible by a group on the ACL rather than an individual user as 
>> implemented now. 
>> 
>> Our understanding is that Keystone does not pass along a user’s group 
>> information during token validation however (such as in the form of 
>> X-Group-Ids/X-Group-Names headers passed along via Keystone middleware). 
>> 
>> The pre-requisite for including that information in the form of headers 
>> would be adding group information to the token validation response. In the 
>> case of UUID, it would be pre-computed and stored in the DB at token 
>> creation time. In the case of PKI, it would be encoded into the PKI token 
>> and further bloat PKI tokens. And in the case of Fernet, it would be 
>> included at token validation time. 
>> 
>> Including group information, however, would also let us efficiently revoke 
>> tokens using token revocation events when group membership is affected in 
>> any way (a user being removed from a group, a group being deleted, or a 
>> group-based role assignment being revoked). The OS-FEDERATION extension is 
>> actually already including groups in tokens today, as a required part of the 
>> federated workflow. We'd effectively be introducing that same behavior into 
>> the core Identity API (see the federated token example): 
>> 
>>   
>> https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-federation-ext.rst#request-an-unscoped-os-federation-token
>>  
>> 
>> This would allow us to address bugs such as: 
>> 
>>   https://bugs.launchpad.net/keystone/+bug/1268751 
>> 
>> In the past, we shied away from including groups if only to avoid bloating 
>> the size of PKI tokens any further (but now we have Fernet tokens providing 
>> a viable alternative). Are there any other reasons not to add group 
>> information to the token validation respon

Re: [openstack-dev] [Manila] Midcycle meetup dates

2015-06-04 Thread Thomas Bechtold
On 30.05.2015 16:11, Ben Swartzlander wrote:
> What remains to be decided:
> 
> Week of July 20-24 or week of July 27-31?

Both fine for me, but I can't join in person.

> Which days of the week?

Wed-Thu would be good.



Re: [openstack-dev] [Cinder] Getting `ValueError: Field `volume_id' cannot be None`

2015-06-04 Thread Deepak Shetty
Hi Thang,
  Since you are working on Snapshot objects, any idea why the test case
works when run all by itself but fails when run as part of the overall suite?
This seems to be related to the Snapshot objects, hence CCing you.

On Wed, Jun 3, 2015 at 9:54 PM, Deepak Shetty  wrote:

> Hi All,
>   I am hitting a strange issue when running Cinder unit tests against my
> patch @
> https://review.openstack.org/#/c/172808/5
>
> I have spent a day and haven't been successful at figuring out how/why my
> patch is causing it!
>
> All tests failing are part of VolumeTestCase suite and from the error (see
> below) it seems
> the Snapshot Object is complaining that 'volume_id' field is null (while
> it shouldn't be)
>
> An example error from the associated Jenkins run can be seen @
>
> http://logs.openstack.org/08/172808/5/check/gate-cinder-python27/0abd15e/console.html.gz#_2015-05-22_13_28_47_140
>
> I am seeing a total of 21 such errors.
>
> It's strange because, when I try to reproduce it locally in my devstack
> env, I see the below:
>
> 1) When I just run: ./run_tests.sh -N
> cinder.tests.unit.test_volume.VolumeTestCase
> all test cases pass
> 
> 2) When I run one individual test case: ./run_tests.sh -N
> cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
> that passes too
> 
> 3) When I run: ./run_tests.sh -N
> I see 21 tests failing, and all are failing with errors similar to the below
>
> {0} cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
> [0.537366s] ... FAILED
>
> Captured traceback:
> ~~~
> Traceback (most recent call last):
>   File "cinder/tests/unit/test_volume.py", line 3219, in
> test_delete_busy_snapshot
> snapshot_obj = objects.Snapshot.get_by_id(self.context,
> snapshot_id)
>   File
> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 163,
> in wrapper
> result = fn(cls, context, *args, **kwargs)
>   File "cinder/objects/snapshot.py", line 130, in get_by_id
> expected_attrs=['metadata'])
>   File "cinder/objects/snapshot.py", line 112, in _from_db_object
> snapshot[name] = value
>   File
> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 675,
> in __setitem__
> setattr(self, name, value)
>   File
> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 70,
> in setter
> field_value = field.coerce(self, name, value)
>   File
> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py", line
> 182, in coerce
> return self._null(obj, attr)
>   File
> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py", line
> 160, in _null
> raise ValueError(_("Field `%s' cannot be None") % attr)
> ValueError: Field `volume_id' cannot be None
>
> Any suggestions / thoughts on why this could be happening?
>
> thanx,
> deepak
>


Re: [openstack-dev] Targeting icehouse-eol?

2015-06-04 Thread Thierry Carrez
Matthew Treinish wrote:
> On Wed, Jun 03, 2015 at 09:06:29AM -0500, Matt Riedemann wrote:
>> Following on the thread about no longer doing stable point releases [1] at
>> the summit we talked about doing icehouse-eol pretty soon [2].
>>
>> I scrubbed the open stable/icehouse patches last week and we're down to at
>> least one screen of changes now [3].
>>
>> My thinking was once we've processed that list, i.e. either approved what
>> we're going to approve or -2 what we aren't, then we should proceed with
>> doing the icehouse-eol tag and deleting the branch.
>>
>> Is everyone generally in agreement with doing this soon?  If so, then I'm
>> thinking target a week from today for the stable maint core team to scrub
>> the list of open reviews in the next week and we then get the infra team to
>> tag the branch and close it out.
> 
> Not really a surprise, but I support doing this soon. Next week seems fine
> to me.

Right, I also support doing this sometime this month.

>> The only open question I have is if we need to do an Icehouse point release
>> prior to the tag and dropping the branch, but I don't think that's happened
>> in the past with branch end of life - the eol tag basically serves as the
>> placeholder to the last 'release'.
> 
> I don't think we need to do a point release, there will be the icehouse-eol
> tag which will mark the same thing. But, even if we later decide to add a
> point release to mark the same thing it is trivial to push another tag for
> the same sha1.

I CC-ed the stable branch release managers for their opinion on it. We
definitely announced 2014.1.5 as the last icehouse release, so I think we
should probably do one. Ideally we would have time to coordinate it in
the coming week so that both plans are compatible.

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [puppet] openstacklib::db::sync proposal

2015-06-04 Thread Martin Mágr


On 06/03/2015 02:32 PM, Martin Mágr wrote:

is *not* necessary




Re: [openstack-dev] [Glance] [all] Liberty summit: Updates in Glance

2015-06-04 Thread Flavio Percoco

On 04/06/15 12:19 +0900, Joe Gordon wrote:



On Mon, Jun 1, 2015 at 6:11 AM, Flavio Percoco  wrote:

   On 01/06/15 13:30 +0100, John Garbutt wrote:

   On 1 June 2015 at 13:10, Flavio Percoco  wrote:

   On 01/06/15 11:57 +0100, John Garbutt wrote:


   On 26/05/15 13:54 -0400, Nikhil Komawar wrote:


   On 5/26/15 12:57 PM, Jesse Cook wrote:
      We also had some hallway talk about putting the v1
   and v2 APIs on top
   of
      the v3 API. This forces faster adoption, verifies
   supportability via
   v1
   and
      v2 tests, increases supportability of v1 and v2
   APIs, and pushes out
   the
      need to kill v1 API.

   Let's discuss more as time and development progresses
   on that
   possibility.
   v3
   API should stay EXPERIMENTAL for now as that would help
   us understand
   use-cases
   across programs as it gets adopted by various
   code-bases. Putting v1/v2
   on
   top
   of v3 would be tricky for now as we may have breaking
   changes with code
   being
   relatively-less stable due to narrow review domain.




   I actually think we'd benefit more from having V2 on top of
   V3 than
   not doing it. I'd probably advocate to make this M material
   rather
   than L but I think it'd be good.

   I think regardless of what we do, I'd like to kill v1 as it
   has a
   sharing model that is not secure.



   Given v1 has lots of users, killing it will be hard.

   If you maintained v1 support as v1 on top of v3 (or v2 I
   guess), could
   you not do something like permanently disable the "bad bits" of
   the
   API?


   I agree it'll be hard but, at least for v1, I believe it should
   happen. It has some security issues (mostly related to image
   sharing)
   that are not going to be fixed there.


   OK, I guess you mean this one:
   https://wiki.openstack.org/wiki/OSSN/1226078


   The idea being, users listing their images, and updating image
   metadata via v1, don't get broken during the evolution?


   The feedback we got at the summit (even from OPs) was that we could
   go
   ahead, mark it as deprecated, give a deprecation period and then
   turn
   it off.


   I am surprised by that reply, but OK.


   FWIW, moving Nova from glance v1 to glance v2, without breaking
   Nova's
   public API, will require someone getting a big chunk of glance
   v1 on
   top of glance v2.


   AFAIK, the biggest issue right now is "changed-since", which is
   something Glance doesn't have in v2 but is exposed through
   Nova's image API.


   Thats the big unanswered question that needs fixing in any spec we
   would approve around this effort.


   I don't have an answer myself right now.




   I'm happy you brought this up. What are Nova's plans to adopt
   Glance's
   v2 ? I heard there was a discussion and something along the lines
   of
   creating a library that wraps both APIs came up.


   We don't have anyone who has stepped up to work on it at this point.

   I think the push you made around this effort in kilo is the latest
   updated on this:
   http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/
   remove-glanceclient-wrapper.html

   It would be great if we could find a glance/nova CPL to drive this
   effort.


   So, unless the library is proposed and some magic happens, is it safe
   to assume that the above spec is still valid and that folks can work
   on it?




   Where can I find more info about this?


   I suspect it will be included on our liberty priority TODO list, that
   I am yet to write, but I expect to appear here:
   http://specs.openstack.org/openstack/nova-specs/


   I really think nova should put some more effort on helping this
   happen. The work I did[0] - all red now, I swear it wasn't - during
   Kilo didn't get enough attention even before we decided to push it
   back. Not a complaint, really. However, I'd love to see some
   cross-project effo

Re: [openstack-dev] [Cinder] Getting `ValueError: Field `volume_id' cannot be None`

2015-06-04 Thread Deepak Shetty
I was able to narrow it down to the scenario where it fails only when I do:

./run_tests.sh -N cinder.tests.unit.test_remotefs
cinder.tests.unit.test_volume.VolumeTestCase

and fails with:
{0}
cinder.tests.unit.test_volume.VolumeTestCase.test_can_delete_errored_snapshot
[0.507361s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File "cinder/tests/unit/test_volume.py", line 3029, in
test_can_delete_errored_snapshot
snapshot_obj = objects.Snapshot.get_by_id(self.context, snapshot_id)
  File
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 169,
in wrapper
result = fn(cls, context, *args, **kwargs)
  File "cinder/objects/snapshot.py", line 130, in get_by_id
expected_attrs=['metadata'])
  File "cinder/objects/snapshot.py", line 112, in _from_db_object
snapshot[name] = value
  File
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 691,
in __setitem__
setattr(self, name, value)
  File
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 70,
in setter
field_value = field.coerce(self, name, value)
  File
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py", line
183, in coerce
return self._null(obj, attr)
  File
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py", line
161, in _null
raise ValueError(_("Field `%s' cannot be None") % attr)
ValueError: Field `volume_id' cannot be None

Both test suites run fine when I run them individually, i.e. both of the below
succeed:

./run_tests.sh -N cinder.tests.unit.test_remotefs - no errors

./run_tests.sh -N cinder.tests.unit.test_volume.VolumeTestCase - no errors

So I modified my patch @ https://review.openstack.org/#/c/172808/ (patch
set 6) and removed all the test cases I added in test_remotefs.py except one,
so that we have less code to debug/deal with!

See
https://review.openstack.org/#/c/172808/6/cinder/tests/unit/test_remotefs.py

Now, when I disable test_create_snapshot_online_success, running both
suites works, but when I enable it, the run fails as above.

I am unable to figure out what the connection is between
test_create_snapshot_online_success in test_remotefs.py
and the VolumeTestCase.test_can_delete_errored_snapshot failure in
test_volume.py.

Can someone help here?

thanx,
deepak



On Thu, Jun 4, 2015 at 1:37 PM, Deepak Shetty  wrote:

> Hi Thang,
>   Since you are working on Snapshot objects, any idea why the test case
> works when run all by itself but fails when run as part of the overall suite?
> This seems to be related to the Snapshot objects, hence CCing you.
>
> On Wed, Jun 3, 2015 at 9:54 PM, Deepak Shetty  wrote:
>
>> Hi All,
>>   I am hitting a strange issue when running Cinder unit tests against my
>> patch @
>> https://review.openstack.org/#/c/172808/5
>>
>> I have spent a day and haven't been successful at figuring out how/why my
>> patch is causing it!
>>
>> All tests failing are part of VolumeTestCase suite and from the error
>> (see below) it seems
>> the Snapshot Object is complaining that 'volume_id' field is null (while
>> it shouldn't be)
>>
>> An example error from the associated Jenkins run can be seen @
>>
>> http://logs.openstack.org/08/172808/5/check/gate-cinder-python27/0abd15e/console.html.gz#_2015-05-22_13_28_47_140
>>
>> I am seeing a total of 21 such errors.
>>
>> It's strange because, when I try to reproduce it locally in my devstack
>> env, I see the below:
>>
>> 1) When I just run: ./run_tests.sh -N cinder.tests.unit.test_volume.
>> VolumeTestCase
>> all test cases pass
>>
>> 2) When I run one individual test case: ./run_tests.sh -N
>> cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
>> that passes too
>>
>> 3) When I run: ./run_tests.sh -N
>> I see 21 tests failing, and all are failing with errors similar to the below
>>
>> {0} cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
>> [0.537366s] ... FAILED
>>
>> Captured traceback:
>> ~~~
>> Traceback (most recent call last):
>>   File "cinder/tests/unit/test_volume.py", line 3219, in
>> test_delete_busy_snapshot
>> snapshot_obj = objects.Snapshot.get_by_id(self.context,
>> snapshot_id)
>>   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
>> line 163, in wrapper
>> result = fn(cls, context, *args, **kwargs)
>>   File "cinder/objects/snapshot.py", line 130, in get_by_id
>> expected_attrs=['metadata'])
>>   File "cinder/objects/snapshot.py", line 112, in _from_db_object
>> snapshot[name] = value
>>   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
>> line 675, in __setitem__
>> setattr(self, name, value)
>>   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
>> line 70, in setter
>> field_value = field.coerce(self, name, value)

Re: [openstack-dev] [Glance] [all] Liberty summit: Updates in Glance

2015-06-04 Thread John Garbutt
On 3 June 2015 at 00:47, Fei Long Wang  wrote:
> On 02/06/15 01:51, Jay Pipes wrote:
>> Please assign me as the CPL for Glance from Nova.
> I'm happy  to work with Jay for Nova from Glance :)

I have added you both here:
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Inter-project_Liaisons

Please do double check I got that correct.

Thanks,
John



Re: [openstack-dev] [Glance] [all] Liberty summit: Updates in Glance

2015-06-04 Thread John Garbutt
On 4 June 2015 at 09:51, John Garbutt  wrote:
> On 3 June 2015 at 00:47, Fei Long Wang  wrote:
>> On 02/06/15 01:51, Jay Pipes wrote:
>>> Please assign me as the CPL for Glance from Nova.
>> I'm happy  to work with Jay for Nova from Glance :)
>
> I have added you both here:
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Inter-project_Liaisons
>
> Please do double check I got that correct.

Sorry, I missed out the thank you, which was the main reason I sent email.

Thank you both for stepping up here!

John



Re: [openstack-dev] [nova] [glance] How to deal with aborted image read?

2015-06-04 Thread Flavio Percoco

On 03/06/15 16:46 -0600, Chris Friesen wrote:
We recently ran into an issue where nova couldn't write an image file 
due to lack of space and so just quit reading from glance.


This caused glance to be stuck with an open file descriptor, which 
meant that the image consumed space even after it was deleted.


I have a crude fix for nova at 
"https://review.openstack.org/#/c/188179/"; which basically continues 
to read the image even though it can't write it.  That seems less than 
ideal for large images though.


Is there a better way to do this?  Is there a way for nova to indicate 
to glance that it's no longer interested in that image and glance can 
close the file?


If I've followed this correctly, on the glance side I think the code 
in question is ultimately 
glance_store._drivers.filesystem.ChunkedFile.__iter__().


Actually, to be honest, I was quite confused by the email :P

Correct me if I still didn't understand what you're asking.

You ran out of space on the Nova side while downloading the image and
there's a file descriptor leak somewhere either in that lovely (sarcasm)
glance wrapper or in glanceclient.

Just by reading your email and glancing at your patch, I believe the bug
might be in glanceclient, but I'd need to dive into this. The piece of
code you'll need to look into is [0].

glance_store is just used server side. If that's what you meant -
glance is keeping the request and the ChunkedFile around - then yes,
glance_store is the place to look into.

[0] 
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v1/images.py#L152
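For what it's worth, the failure mode is easy to sketch with a minimal stand-in for a ChunkedFile-style reader (this is illustrative, not the actual glance_store code): closing the file in a finally block means an aborted consumer, which raises GeneratorExit inside the generator, still releases the descriptor.

```python
import os
import tempfile

def chunked_file(path, chunk_size=4, on_close=None):
    """Yield the file in chunks; the finally block runs even when the
    consumer abandons the generator early (GeneratorExit), so the
    descriptor is always released."""
    f = open(path, "rb")
    try:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk
    finally:
        f.close()
        if on_close is not None:
            on_close()

closed = []
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"0123456789abcdef")
path = tmp.name
reader = chunked_file(path, on_close=lambda: closed.append(True))
first = next(reader)   # consumer reads one chunk...
reader.close()         # ...then aborts (e.g. out of disk on the nova side)
os.unlink(path)
print(first, closed)   # b'0123' [True]
```

If the server-side iterator lacks such a guard, the descriptor stays open until garbage collection, which is why a deleted image can keep consuming space.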

Cheers,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Glance] [all] Liberty summit: Updates in Glance

2015-06-04 Thread John Garbutt
On 1 June 2015 at 14:11, Flavio Percoco  wrote:
> On 01/06/15 13:30 +0100, John Garbutt wrote:
>> On 1 June 2015 at 13:10, Flavio Percoco  wrote:
>>> On 01/06/15 11:57 +0100, John Garbutt wrote:
>>> I'm happy you brought this up. What are Nova's plans to adopt Glance's
>>> v2 ? I heard there was a discussion and something along the lines of
>>> creating a library that wraps both APIs came up.
>>
>> We don't have anyone who has stepped up to work on it at his point.
>>
>> I think the push you made around this effort in kilo is the latest
>> updated on this:
>>
>> http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/remove-glanceclient-wrapper.html
>>
>> It would be great if we could find a glance/nova CPL to drive this effort.
>
> So, unless the library is proposed and some magic happens, is it safe
> to assume that the above spec is still valid and that folks can work
> on it?

Well it will need re-approving for Liberty, but we can get that done
very quickly.

I am assuming Jay is pushing this now, which is cool.

Let me know how I can help there.

>>> Where can I find more info about this?
>>
>> I suspect it will be included on our liberty priority TODO list, that
>> I am yet to write, but I expect to appear here:
>> http://specs.openstack.org/openstack/nova-specs/
>>
>>> I really think nova should put some more effort on helping this
>>> happen. The work I did[0] - all red now, I swear it wasn't - during
>>> Kilo didn't get enough attention even before we decided to push it
>>> back. Not a complain, really. However, I'd love to see some
>>> cross-project efforts on making this happen.
>>> [0] https://review.openstack.org/#/c/144875/
>>
>>
>> As there is no one to work on the effort, we haven't made it a
>> priority for liberty.
>>
>> If someone is able to step up to help complete the work, I can do my
>> best to help get that effort reviewed, by raising its priority, just
>> as we did in Kilo.
>
> IIRC, the patch wasn't far from being ready.

I think we reviewed this at the mid cycle.
Basically the patch needs splitting up into little chunks, where possible.
Anyways, lets not get distracted here.

> The latest patch-sets
> relied on the gate to run some tests, and the biggest issue I had -
> still have - is that this script [0] didn't even use glanceclient but
> direct http calls. The issue, to be precise, is that I didn't have
> ways to test it locally, which made the work painful.

It's possible to test it locally using open-source software, but it's a
painful setup process.

> If there's a way to do it - something that has already being asked -
> it'd be great.
>
> This said, I'm not sure how much time I'll have for this but I'm
> trying to find someone that could help out.
>
> https://review.openstack.org/#/c/144875/30/plugins/xenserver/xenapi/etc/xapi.d/plugins/glance,cm

I offered to help with that at the mid cycle, but I figured it had got
fixed already, because no one reached out. Sorry, I meant to reach out
to you to check, but I totally forgot.

I don't have the time now, I am afraid. Nikhil or I should be able to
find someone at Rackspace to help out with that. It's important.

>> I suspect looking at how to slowly move towards v2, rather than going
>> for a "big bang" approach, will make this easier to land. That and
>> solving how we implement "changed-since", if thats not available in
>> the newer glance APIs. Honestly, part of me wonders about skipping v2,
>> and going straight to v3.
>
> Regardless, I think we should enable people to run on a v2 only
> deployment. Not a crazy thought, I think. We'll have to think this
> through a bit more.

Agreed its important and needs doing.

Happy to see some signs of progress again.

Thanks,
John



Re: [openstack-dev] [global-requirements][pbr] tarball and git requirements no longer supported in requirements.txt

2015-06-04 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 06/03/2015 11:08 PM, Robert Collins wrote:
> Hi, right now there is a little-used (e.g. it's not in any active
> project these days) previous feature of pbr/global-requirements:
> we supported things that setuptools does not: to wit, tarball and
> git requirements.
> 
> Now, these things are supported by pip, so the implementation
> involved recursing into pip from our setup.py (setup.py -> pbr ->
> pip). What we exported into setuptools was only the metadata about
> the dependency name. This meant that we were re-entering pip,
> potentially many times - it was, to be blunt, awful.
> 
> Fortunately we removed the recursive re-entry into pip in pbr 1.0. 
> This didn't remove the ability to parse requirements.txt files
> that contain urls, but it does mean they are converted to the
> simple dependency name when doing 'pip install .' in a project tree
> (or pip install $projectname), and so they are effectively
> unversioned - no lower and no upper bound. This works poorly in the
> gate: please don't use tarball or git urls in requirements.txt (or
> setup.cfg for that matter).
> 
> We can still choose to use something from git or a tarball in test 
> jobs, *if* thats the right thing (which it rarely is: I'm just
> being clear that the technical capability still exists)... but it
> needs to be done outside of requirements.txt going forward. Its
> also something that we can support with the new constraints system
> if desired [which will operate globally once in place (it is an
> extension of global-requirements)].
> 
> One question that this raises, and this is why I wrote the email:
> is there any need to support this at all: can we say that we won't
> use tarball/vcs support at all and block it as a policy step in
> global requirements? AIUI both git and tarball support is
> problematic for CI jobs due to the increased flakiness of depending
> on network resources... so it's actively harmful anyway.
> 

Lots of Neutron modules, like advanced services or out-of-tree
plugins, rely on neutron code being checked out from git [1]. I'm not
saying it's the way to go forward, and there were plans to stop relying
on the latest git to avoid frequent breakages, but that's not yet implemented.

[1]:
http://git.openstack.org/cgit/openstack/neutron-vpnaas/tree/tox.ini#n10
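A gate-side policy check of the kind Robert suggests could start out as simply as this (the regex and the helper are illustrative sketches, not an actual pbr or global-requirements job):

```python
import re

# Requirement lines that point at tarballs or VCS URLs; pip honors
# these, but pbr >= 1.0 reduces them to a bare, unversioned name.
URL_REQ = re.compile(r"^\s*(-e\s+)?(git\+|hg\+|https?://|file://)")

def find_url_requirements(lines):
    """Return the requirement lines that would be blocked by policy."""
    return [line.strip() for line in lines if URL_REQ.search(line)]

reqs = [
    "oslo.messaging>=1.8.0",
    "-e git+https://git.openstack.org/openstack/neutron#egg=neutron",
    "https://example.com/pkgs/foo-1.0.tar.gz",
]
flagged = find_url_requirements(reqs)
print(flagged)   # the two URL-based lines
```

Running something like this against each project's requirements.txt would enforce the "no tarball/vcs requirements" rule mechanically rather than by review convention.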

Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJVcBUnAAoJEC5aWaUY1u57N2EH/jUFd0H9pQ7LApSAIlDTEl2v
WR1EXnc9Vxf5nCWq/qmncj3OCpMDlgL/ZMrFu74LRTDbe38+16kh+Fb+FvBEPGA4
ZkQC3gyg22Se/QcerTxdPil16hnT912Hr3E0cTuu/4ktyipPrVsO39N56Jbrb6WQ
SRCrEohIg7C3c0NgFcvBGh+S4rNf8IKT1oLzKrRhSLzIE8lSeGa1GNnSXPAXk19/
2KIEnqBz3Q5J6umTprB5DFdxMe93Pj6jZmGIMFaHXYgG/yTdKz3zzGM3hpuLyGUQ
kKYEzFJZ4vf2c6NBg//GYTcAkGjkM2QmAnS+uoztU5vm4QRkLgGcDCz29eQ5ufA=
=6bUu
-END PGP SIGNATURE-



Re: [openstack-dev] [nova][api] New micro-version needed for api bug fix or not?

2015-06-04 Thread Jens Rosenboom
2015-06-03 14:25 GMT+02:00 John Garbutt :
> On 3 June 2015 at 12:52, Jay Pipes  wrote:
>> On 06/03/2015 02:34 AM, Chris Friesen wrote:
>>>
>>> On 06/03/2015 12:16 AM, Jens Rosenboom wrote:
>>>
 I'm wondering though whether the current API behaviour here should be
 changed more generally. Is there a plausible reason to silently
 discard options that are not allowed for non-admins? For me it would
 make more sense to return an error in that case.
>>>
>>>
>>> If we're bumping the microversion anyways, I'd be in favor of having
>>> that throw an error rather than silently ignore options.
>>>
>>> You could maybe even have a helpful "those options require admin
>>> privileges" error message that gets displayed to the user.
>>
>> ++
>
> +1
>
> We must keep adding this sort of validation as we evolve v2.1
>
This is one of the big changes in the "default behaviour" since
v2.0: validate input and make things discoverable, rather than
silently failing.

O.k., but can we agree that this will be a second step which can be
handled after the current bugfix-microversion-spec?

IIUC we cannot change the behaviour for the old API anyway, so this
would only affect the remaining admin-only options.



Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-04 Thread Jay Lau
Hi Alex,

Based on my understanding, the Magnum code base was derived from Ironic's;
that's why Magnum uses http headers: when Magnum was created, Ironic was
also using http headers.

Perhaps Magnum can follow the way Ironic moved to using Microversions?

Thanks.
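For reference, the two negotiation styles under discussion look roughly like this; the header names, the response shape and the version numbers below are illustrative, not the exact Ironic ones:

```python
import json

def version_error(api_min, api_max, requested, in_body=True):
    """Build a 406 response for an unsupported microversion, reporting
    the supported range either in the body (the proposal) or via http
    headers (the Ironic-style approach)."""
    status = 406
    headers = {"Content-Type": "application/json"}
    body = ""
    if in_body:
        body = json.dumps({
            "error": "Version %s is not supported" % requested,
            "min_version": api_min,
            "max_version": api_max,
        })
    else:
        headers["X-OpenStack-API-Minimum-Version"] = api_min
        headers["X-OpenStack-API-Maximum-Version"] = api_max
    return status, headers, body

status, headers, body = version_error("2.1", "2.12", "2.99")
print(status, json.loads(body)["max_version"])   # 406 2.12
```

The body form has room for a human-readable error message; the header form needs no body parsing but cannot carry one, which is the trade-off raised in the thread.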



2015-06-04 14:58 GMT+08:00 Xu, Hejie :

> Hi, guys,
>
> I'm working on adding Microversions to the API-WG's guideline to make
> sure we have consistent Microversion behavior in the API for users.
> Nova and Ironic already have Microversion implementations, and as far as
> I know Magnum (https://review.openstack.org/#/c/184975/) is going to
> implement Microversions also.
>
> I hope all the projects which support (or plan to support) Microversions
> can join the review of the guideline.
>
> The Microversion specification (mostly copied from nova-specs):
> https://review.openstack.org/#/c/187112
> And another guideline for when we should bump the Microversion:
> https://review.openstack.org/#/c/187896/
>
> As far as I know, there is already a small difference between Nova's and
> Ironic's implementations. Ironic returns the min/max version via http
> headers when the requested version isn't supported by the server. There
> is no such thing in nova, but version negotiation is something we need
> for nova also.
> Sean has pointed out that we should use the response body instead of
> http headers, since the body can include an error message. I really hope
> the ironic team can take a look and tell us if you have a compelling
> reason for using http headers.
>
> And if we decide to return the body instead of http headers, we probably
> need to think about backwards compatibility also, because Microversioning
> itself isn't versioned.
> So I think we should keep those headers for a while; does that make sense?
>
> I hope we end up with a good guideline for Microversions, because we can
> only change Microversioning itself in a backwards-compatible way.
>
> Thanks
> Alex Xu
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)


[openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-04 Thread Anna Kamyshnikova
Hi, neutrons!

Some time ago I discovered a bug in l3 agent rescheduling [1]. When there
are a lot of resources and agent_down_time is not big enough, neutron-server
starts marking l3 agents as dead. The same issue was discovered and
fixed for DHCP agents, and I have proposed a change similar to the one that
was done for DHCP agents. [2]

There is no unified opinion on this bug and the proposed change, so I want to
ask developers whether it is worth continuing work on this patch or not.

[1] - https://bugs.launchpad.net/neutron/+bug/1440761
[2] - https://review.openstack.org/171592
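The failure mode can be sketched as follows (a simplification of the real liveness check; the option name is borrowed loosely from neutron):

```python
import time

def is_agent_dead(last_heartbeat, agent_down_time, now=None):
    """The server reports an agent dead once its last heartbeat is older
    than agent_down_time seconds, so a busy agent whose state reports
    lag behind can get rescheduled even though it is healthy."""
    now = time.time() if now is None else now
    return (now - last_heartbeat) > agent_down_time

now = 1000.0
print(is_agent_dead(now - 80, 75, now), is_agent_dead(now - 30, 75, now))
# True False
```

With many resources, state reports queue up and the heartbeat gap grows past agent_down_time, which is exactly the spurious "dead" condition described above.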

-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc


[openstack-dev] [cinder] Does zonemanager support virtual fabric?

2015-06-04 Thread liuxinguo
Hi,

Many FC switches have a virtual fabric function, and sometimes we need to
manage the virtual fabric, e.g. create zones, delete zones, etc.
But it seems that there is no virtual fabric support in our zone manager.

So can we manage virtual fabrics with the zone manager? If not, how should
virtual fabrics be managed?

Any input will be appreciated!

Thanks,
Liu


Re: [openstack-dev] [nova][api] New micro-version needed for api bug fix or not?

2015-06-04 Thread John Garbutt
On 4 June 2015 at 10:40, Jens Rosenboom  wrote:
> 2015-06-03 14:25 GMT+02:00 John Garbutt :
>> On 3 June 2015 at 12:52, Jay Pipes  wrote:
>>> On 06/03/2015 02:34 AM, Chris Friesen wrote:

 On 06/03/2015 12:16 AM, Jens Rosenboom wrote:

> I'm wondering though whether the current API behaviour here should be
> changed more generally. Is there a plausible reason to silently
> discard options that are not allowed for non-admins? For me it would
> make more sense to return an error in that case.


 If we're bumping the microversion anyways, I'd be in favor of having
 that throw an error rather than silently ignore options.

 You could maybe even have a helpful "those options require admin
 privileges" error message that gets displayed to the user.
>>>
>>> ++
>>
>> +1
>>
>> We must keep adding this sort of validation as we evolve v2.1
>>
>> This is one of the big changes in the "default behaviour" since
>> v2.0: validate input and make things discoverable, rather than
>> silently failing.
>
> O.k., but can we agree that this will be a second step which can be
> handled after the current bugfix-microversion-spec?

We can do this in a separate bug fix if you prefer.

> IIUC we cannot change the behaviour for the old API anyway, so this
> would only affect the remaining admin-only options.

Stopping things from silently failing is exactly the kind of change I want to
see us do more of in v2.1.
I see it as a follow up on all the JSON schema validation we have added.
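The behaviour change being discussed could look roughly like this (a toy sketch: the option names, the 2.37 cut-off and the error text are all made up for illustration):

```python
ADMIN_ONLY = {"host", "force_availability_zone"}

def validate_request(body, is_admin, microversion):
    """Before the (hypothetical) 2.37 microversion, admin-only options
    from a non-admin are silently dropped; from 2.37 on they raise, so
    the caller learns why the option had no effect."""
    if is_admin:
        return dict(body)
    extra = set(body) & ADMIN_ONLY
    if not extra:
        return dict(body)
    if microversion >= (2, 37):
        raise ValueError("options %s require admin privileges"
                         % sorted(extra))
    return {k: v for k, v in body.items() if k not in ADMIN_ONLY}

print(validate_request({"name": "vm1", "host": "c1"}, False, (2, 36)))
# {'name': 'vm1'}
```

Gating the stricter behaviour on the microversion is what keeps the old silently-discarding contract intact for existing clients.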

Thanks,
John



Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Sean Dague
On 06/03/2015 08:40 PM, Tim Hinrichs wrote:
> As long as there's some way to get the *declarative* policy from the
> system (as a data file or as an API call) that sounds fine.  But I'm
> dubious that it will be easy to keep the API call that returns the
> declarative policy in sync with the actual code that implements that policy.

Um... why? Nova (or any other server project) needs to know what the
currently computed policy is to actually enforce it internally. Turning
around and spitting that back out on the wire is pretty straightforward.

Is there some secret dragon I'm missing here?

-Sean
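To make that point concrete, a toy version of "compute the effective policy, then report it and warn on restrictions" might look like this (the rule names and the restriction heuristic are invented for illustration):

```python
def effective_policy(defaults, overrides):
    """Merge operator overrides onto the in-code defaults and flag rules
    that became more restrictive (toy heuristic: the default was the
    empty rule, i.e. allow-all, and the override is not)."""
    merged = dict(defaults)
    warnings = []
    for rule, expr in overrides.items():
        merged[rule] = expr
        if defaults.get(rule) == "" and expr != "":
            warnings.append("%s restricted from the default; this may "
                            "impede interoperability" % rule)
    return merged, warnings

defaults = {"compute:create": "", "compute:delete": "rule:admin_or_owner"}
overrides = {"compute:create": "role:admin"}
merged, warns = effective_policy(defaults, overrides)
print(merged["compute:create"], len(warns))   # role:admin 1
```

Whatever data structure the service enforces internally is the same one it would serialize for an API call that returns the effective policy, which is Sean's point above.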

> 
> Tim
> 
> On Wed, Jun 3, 2015 at 1:53 PM, Doug Hellmann  > wrote:
> 
> Excerpts from Sean Dague's message of 2015-06-03 13:34:11 -0400:
> > On 06/03/2015 12:10 PM, Tim Hinrichs wrote:
> > > I definitely buy the idea of layering policies on top of each other.
> > > But I'd worry about the long-term feasibility of putting default
> > > policies into code mainly because it ensures we'll never be able to
> > > provide any tools that help users (or other services like
> Horizon) know
> > > what the effective policy actually is.  In contrast, if the code
> is just
> > > an implementation of the API, and there is some (or perhaps several)
> > > declarative description(s) of which of those APis are permitted
> to be
> > > executed by whom, we can build tools to analyze those policies.  Two
> > > thoughts.
> > >
> > > 1) If the goal is to provide warnings to the user about
> questionable API
> > > policy choices, I'd suggest adding policy-analysis functionality
> to say
> > > oslo_policy.  The policy-analysis code would take 2 inputs: (i) the
> > > policy and (ii) a list of policy properties, and would generate a
> > > warning if any of the properties are true for the given policy. 
>  Then
> > > each project could provide a file that describes which policy
> properties
> > > are questionable, and anyone wanting to see the warnings run the
> > > functionality on that project's policy and the project's policy
> property
> > > file.
> > >
> > > It would definitely help me if we saw a handful of examples of the
> > > warnings we'd want to generate.
> >
> > WARN: "server create permissions have been restricted from the
> default,
> > this may impede operation and interoperability of your OpenStack
> > installation"
> >
> > > 2) If the goal is to provide sensible defaults so the system
> functions
> > > if there's no policy.json (or a dynamic policy cached from
> Keystone),
> > > why not create a default_policy.json file and use that whenever
> > > policy.json doesn't exist (or more precisely to use policy.json to
> > > override default_policy.json in some reasonable way).
> >
> > Because it's still a file, living in /etc. Files living in etc are
> > things people feel they can modify. They are also things that don't
> > always get deployed correctly with code deploys. People might not
> > realize that default_policy.json is super important to be updated
> every
> > time the code is rolled out.
> 
> It doesn't have to live in /etc, though. It could be packaged in the
> nova code namespace as a data file, and accessed from there.
> 
> Doug
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova][trove] Protected openstack resources

2015-06-04 Thread John Garbutt
On 3 June 2015 at 20:51, Amrith Kumar  wrote:
> Several of us including Bruno Lago, Victoria Martinez de la Cruz (vkmc),
> Flavio Percoco, Nikhil Manchanda (SlickNik), Vipul Sabhaya (vipul), Doug
> Shelley (dougshelley66), and several others from the Trove team met in
> Vancouver and were joined by some others (who I do not know by name) from
> other projects. After the summit, I summarized that meeting in the email
> thread here [1].
>
> In that meeting, and in the course of other conversations, we concluded that
> projects like Trove require the ability to launch instances of resources
> from projects like Nova but bad things happen when a user directly interacts
> with these resources. It would therefore be highly advantageous to have a
> class of instances which are protected from direct user actions.
>
> The “bad things” described above stem from the fact that the guest agent
> that Trove uses is a component that is on the guest instance and it
> communicates with the other Trove controller services over an oslo.messaging
> message queue. If a guest instance were compromised, the fact that it has a
> connection path to the message queue could become a vulnerability. Deployers
> of Trove have addressed these concerns and are able to operate a secure
> Trove system by launching Nova instances in a different tenant than the end
> user. The changes to Trove for this are currently not part of Trove but will
> be made available shortly.

FWIW, this is basically how Glance uses a multi-tenant Swift to store
images from Nova from various tenants.

I think there are more exciting ways that some folks have brewing that
involve some sort of combination of two tenants, or some such.

> Using oslo.messaging for the communication between Trove controller and the
> guest agents allows deployers to choose the underlying AMQP transport.
> However, oslo.messaging is tightly coupled with AMQP semantics. One proposed
> alternative (zaqar) that could address some of Trove’s issues has no
> integration with oslo.messaging.
>
> Therefore, to adopt zaqar, Trove would likely have to abandon oslo.messaging
> and integrate tightly with zaqar which strikes many of us as more
> restrictive and less attractive. I know of at least one user of Trove who
> has deployed oslo.messaging with qpid as the underlying transport, rather
> than the more commonly deployed RabbitMQ.
>
> The request to create an oslo.messaging driver for zaqar (or was it a zaqar
> driver for oslo.messaging) met with some resistance for technical reasons.
> Flavio summarizes it in 2 saying, “This is probably the main reason why
> there's no driver for Zaqar in oslo.messaging. That is, to prevent people
> from actually using Zaqar as a message bus in openstack.”

So you could create a REST API for your agent to talk to. They are
quite well understood, but I have no idea about how your agent talks
to your server, so it could be a terrible idea.

> Other projects like Sahara, and potentially others need a mechanism by which
> to protect their resources from direct manipulation by a user.
>
> Several conversations ensued with members of Nova team and Bruno drafted a
> write-up summarizing some aspects of the problems. To facilitate a quick
> review of this request, the Trove team has put together a document and it is
> available for review at 3.
>
> The request is to have Nova and potentially other OpenStack projects review
> the issues being described. They can then provide protected resources that
> projects like Trove can consume.
>
> Equally, if you work on some other project that could benefit from protected
> resources, please chime in.
>
> Please post comments on the request on the review (4) and register
> blueprints or work towards delivering these capabilities in your respective
> projects. The request is not prescriptive of how projects like Nova should
> implement these capabilities, it merely requests that they be created.

Why is running your Nova VMs in a trove- or sahara-specific tenant
not good enough for your use case?

I am not trying to be difficult, I am just curious about what specific
issues something "better" would need to fix.

Thanks,
John



Re: [openstack-dev] [keystone] [nova] [oslo] [cross-project] Dynamic Policy

2015-06-04 Thread Sean Dague
On 06/03/2015 08:46 PM, Hu, David J (Converged Cloud) wrote:
> I am not a big fan of putting admins through a multi-step process.  It looks 
> like admins will need to learn the unified policy file first, then 1 or 2 or more 
> releases later, learn about policy in the db.  I understand we are doing 
> things incrementally.  I would prefer that we come up with something or some 
> process that avoids the hassle of dealing with the unified policy file for admins. 
> In other words, admins go straight from the policy file as it is today to policy in 
> the db.

Right, 100% agreed. On something like this it's hugely important not
just to consider the incremental internals that people want to do, but
the incremental changes to long standing practices that cloud admins
have come to know over the years. Dragging ops through a weird and
temporary intermediary step that we know is going away is *terrible*,
and makes them all hate us.

Which is another reason why I think a single policy file is just a non
starter from an actual deploy perspective. It's not a way point from an
op perspective, it's a radical change, that will be radically changed
again in a release or two.

I think an audit tool that slurps up all the policy files from projects
and figures out inconsistencies, and fixes then proposed to the existing
projects, is fine. That preps the policies for dynamic merge at later
dates. Those proposed changes will also trigger deprecation approaches
in the projects. For instance, changing the nova policy from admin ->
compute:admin is not just a switch to be flipped. That's going to
require a deprecation cycle at least so that people can upgrade to
Liberty, admin definitions still do what they expect, but they get
WARNings about it.

Every step here requires "what's going to look different to an operator,
and what change might they need to make after that." and optimize for
that path being as small as possible.

-Sean
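An audit tool of the kind described could start life as something this small (a toy: the namespacing scheme and the inconsistency rule are illustrative only):

```python
def audit_policies(per_project):
    """Collect every project's rules under a project-namespaced key and
    report targets that exist in several projects with different
    expressions - candidates for a proposed consistency fix."""
    unified, seen, inconsistent = {}, {}, []
    for project, rules in per_project.items():
        for target, expr in rules.items():
            unified["%s:%s" % (project, target)] = expr
            if target in seen and seen[target] != expr:
                inconsistent.append(target)
            seen.setdefault(target, expr)
    return unified, inconsistent

per_project = {
    "nova": {"admin_api": "role:admin"},
    "cinder": {"admin_api": "role:admin or role:cloud_admin"},
}
unified, bad = audit_policies(per_project)
print(sorted(unified), bad)
# ['cinder:admin_api', 'nova:admin_api'] ['admin_api']
```

Namespacing the rules up front is what later makes a dynamic merge possible without forcing operators through a single-file intermediate step.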

> 
> 
> David
> 
> 
> -Original Message-
> From: Adam Young [mailto:ayo...@redhat.com] 
> Sent: Wednesday, June 3, 2015 4:39 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [keystone] [nova] [oslo] [cross-project] Dynamic 
> Policy
> 
> On 06/03/2015 02:55 PM, Sean Dague wrote:
>> On 06/03/2015 02:44 PM, David Chadwick wrote:
>>> In the design that we have been building for a policy administration 
>>> database, we dont require a single policy in order to unify common 
>>> concepts such as hierarchical attributes and roles between the 
>>> different policies of Openstack services. This is because policies 
>>> and hierarchies are held separately and are linked via a many to many 
>>> relationship. My understanding of Adam's primary requirement was that 
>>> a role hierarchy say, should be common across all OpenStack service 
>>> policies, without this necessarily meaning you have to have one huge 
>>> policy. And there is no requirement for Keystone to own all the 
>>> policies. So each service could still own and manage its own policy, 
>>> whilst having attribute hierarchies in common.
>>>
>>> Does this help?
>>>
>>> regards
>>>
>>> David
>> That part makes total sense. What concerned me is there was an 
>> intermediary step that seemed like it was literally *one file* 
>> (https://review.openstack.org/134656). That particular step I think is 
>> unworkable.
> 
> How is this for an approach:
> 
> 1.  Unified policy  file that is just the union of what is in the current 
> projects.  Each project will have a clearly marked section.
> 
> 2.  Split up the main file into sections, one per each project, and put those 
> in separate files.  Build system will concatenate them into a single file.
> 
> 3.  Allow each of the projects to replace their section of the file with file 
> containing just an URL to the upstream git repo that contains their project 
> specific section.  When building the overall unified policy file, those 
> projects that have their own section will get it merged in from their own 
> repos.
> 
> 4.  Eventually, the unified policy file will be expected to be built out of 
> each of the projects git repos.
> 
> I agree with you that we want the projects to manage their own, I just think 
> we need a scrub step where we all look at the individual sections together 
> with a critical eye first.
> 
>>
>> By "common role hierachy" do you mean namespaced roles for services?
>> Because if yes, definitely. And I think that's probably the first
>> concrete step moving the whole thing forward, which should be doable on
>> the existing static json definitions.
>>
>>  -Sean
>>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Re: [openstack-dev] [all]Big Tent Mode within respective projects

2015-06-04 Thread John Garbutt
On 3 June 2015 at 13:39, Jay Pipes  wrote:
> On 06/03/2015 08:25 AM, Zhipeng Huang wrote:
>>
>> Hi All,
>>
>> As I understand, Neutron by far has the clearest big tent mode via its
>> in-tree/out-of-tree decomposition, thanks to Kyle and other Neutron team
>> members effort.
>>
>> So my question is, is it the same for the other projects? For example,
>> does Nova also have the project-level Big Tent Mode Neutron has?
>
>
> Hi Zhipeng,
>
> At this time, Neutron is the only project that has done any splitting out of
> driver and advanced services repos. Other projects have discussed doing
> this, but, at least in Nova, that discussion was put on hold for the time
> being. Last I remember, we agreed that we would clean up, stabilize and
> document the virt driver API in Nova before any splitting of driver repos
> would be feasible.

+1 to jay's comment.

I see Nova's mission as providing a solid, interoperable API experience
for on-demand compute resources. Right now, that's happening best by
keeping things in tree, but we are doing work to make other options
possible.

I actually see the existence of projects such as Cinder, Heat and
Magnum as success stories born out of Nova saying no to expanding our
scope (and in the case of Cinder, actively trying to reduce our
scope). I hope more of both of those things will happen in the future.

If we had accepted these efforts into Nova, they would not have had
the freedom they get by living inside OpenStack, but outside of
Compute. Something the big tent makes much easier to deal with. I
don't think they would have gained much by being inside the compute
project, mostly because we are all crazy busy looking after Nova.

Thanks,
John



[openstack-dev] [nova] Scaling out Nova (and the code review process)

2015-06-04 Thread John Garbutt
This was sent in another thread, but it's a quick read-out from one of the
Nova sessions at the summit that we might want a Nova-specific
discussion on...

-- Forwarded message --
From: John Garbutt 
Date: 3 June 2015 at 17:57
Subject: Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code
review process (subdir cores)

+1 to ttx and James's points on trust and relationships, indeed
referencing the summit session that ttx mentioned:
https://etherpad.openstack.org/p/liberty-cross-project-in-team-scaling

On 3 June 2015 at 16:01, Nikola Đipanov  wrote:

> This was one of the arguments against doing exactly what you propose in
> Nova - we want the same (high?) level of reviews in all parts of the
> code, and strong familiarity with the whole.
>
> But I think it's failing - Nova is just too big - and there are not
> enough skilled people to do the work without a massive scope reduction.
>
> I am not sure how to fix it TBH (tho my gut feeling says we should
> loosen not tighten the constraints).

So we did have a session on this a the summit:
https://etherpad.openstack.org/p/YVR-nova-liberty-process

I have yet to properly write that up. We don't have a solution, but we
do now have a plan to try to improve things, and it looks something
like this, in no particular order...

Step A is having better developer documentation around the reasons
behind the big strokes of our architecture. This tribal knowledge is
hard to get, and frankly I don't think a single person currently has
the full context here. I am looking for a better something, rather
than an out of date nothing. Perfection here is costly and largely
pointless.
https://blueprints.launchpad.net/nova/+spec/devref-refresh-liberty

Step B is having better / more explicit mentoring for new
contributors, and helping them get started. This includes both
reviewers and those submitting code / features. This effort will help
us improve the artefacts generated by Step A, so they are more useful
for folks gaining context. This used to happen all the time; I feel we
got out of the habit.
https://wiki.openstack.org/wiki/Nova/Mentoring

Step C is doing a better job of describing WHY we do what we are
doing. Firstly, so people know why we do what appear to be crazy
things; for example, Nova might be their first experience of
OpenStack, or of any sort of open source development. Secondly, so we
can have better-informed conversations about how to improve how we work.

Step D is allowing self-organising sub-teams, which are now free to
decide on the top few patches that are considered "ready to merge" by
that subteam and thus overdue in terms of "ready for nova-core
review". That's happening in this etherpad:
https://etherpad.openstack.org/p/liberty-nova-priorities-tracking
Increased nova-core focus (as seen during feature freeze) tends to
help things get through the system quickly. Over time, it's possible a
subteam might gain enough trust that we treat their recommendation as
a +2, but we did agree to leave that on the back burner for liberty-1
and see how things progress throughout the liberty release. It's hoped
the addition of tagging in Gerrit might eventually remove the need for
the etherpad.

Step E is continuing to halt Nova scope creep, and ideally reduce the
scope. The first part is starting to record what the "tribal
knowledge" says the scope is:
http://docs.openstack.org/developer/nova/devref/project_scope.html
The second part is better seams and contracts between the different
components in Nova, one aim being that it will be easier for different
groups to work more independently. Much of the scheduler work is
looking to create a more stable interface so it's possible the
scheduler (and maybe resource tracker) can gain more autonomy. The
cells v2 work is helping evolve the API <-> compute
interface. The tasks work is likely to help a major evolution of the
virt driver interface. The virt driver interface is also being
improved by the objects work, including formalising image properties
and flavor extra specs. While it may never be practical for any or all
of these things to move out of the Nova git tree, looser coupling
through some stricter contracts will help.

Step F is reviewing the impact the above ideas are having, and
deciding where we go next.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Liberty Priorities for Nova

2015-06-04 Thread John Garbutt
Hi,

We had a great discussion at the summit around priorities:
https://etherpad.openstack.org/p/YVR-nova-liberty-priorities

I have made a stab at writing that up here; please review it if you
are interested:
https://review.openstack.org/#/c/187272/

Note we plan to keep focus on the reviews using the etherpad like we
did in kilo:
https://etherpad.openstack.org/p/liberty-nova-priorities-tracking

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Liberty Process, Deadlines and Dates for Nova

2015-06-04 Thread John Garbutt
Hi,

Following up from nova-meetings and this summit session:
https://etherpad.openstack.org/p/YVR-nova-liberty-process

We agreed to roughly follow the same deadlines as kilo. Importantly,
we wanted to keep the blueprint freeze and non-priority feature
freeze process from kilo.

This means we align with:
https://wiki.openstack.org/wiki/Liberty_Release_Schedule

Adding in priority specs, this gives us:
June 23-25: liberty-1 -- spec freeze for L, backlog stays open
July 21-23: mid-cycle meetup
July 28-30: liberty-2 -- non-priority feature freeze
September 1-3: liberty-3 -- align with string freeze, etc, open specs for M

Now, in the past a focused day of effort has really helped. We all hang
out in #openstack-nova as usual, but we all try to focus on something
specific that is needed really soon.

With that in mind, does this seem like a good idea?
June 12: spec review day (to be confirmed)
July 10: feature review bash day (to be confirmed)
August 7: bug triage day (to be confirmed)
September 11: bug review bash day (to be confirmed)

I am adding more details here:
https://wiki.openstack.org/wiki/Nova/Liberty_Release_Schedule

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all]Big Tent Mode within respective projects

2015-06-04 Thread Boris Pavlovic
Jay,


> At this time, Neutron is the only project that has done any splitting out
> of driver and advanced services repos. Other projects have discussed doing
> this, but, at least in Nova, that discussion was put on hold for the time
> being. Last I remember, we agreed that we would clean up, stabilize and
> document the virt driver API in Nova before any splitting of driver repos
> would be feasible.


IMHO, Neutron is not the only one with this. ;)
Rally supports out-of-tree plugins as well, and I have already seen some
third-party repos:
https://github.com/stackforge/haos

Best regards,
Boris Pavlovic

On Thu, Jun 4, 2015 at 2:08 PM, John Garbutt  wrote:

> On 3 June 2015 at 13:39, Jay Pipes  wrote:
> > On 06/03/2015 08:25 AM, Zhipeng Huang wrote:
> >>
> >> Hi All,
> >>
> >> As I understand, Neutron by far has the clearest big tent mode via its
> >> in-tree/out-of-tree decomposition, thanks to Kyle and other Neutron team
> >> members effort.
> >>
> >> So my question is, is it the same for the other projects? For example,
> >> does Nova also have the project-level Big Tent Mode Neutron has?
> >
> >
> > Hi Zhipeng,
> >
> > At this time, Neutron is the only project that has done any splitting
> out of
> > driver and advanced services repos. Other projects have discussed doing
> > this, but, at least in Nova, that discussion was put on hold for the time
> > being. Last I remember, we agreed that we would clean up, stabilize and
> > document the virt driver API in Nova before any splitting of driver repos
> > would be feasible.
>
> +1 to jay's comment.
>
> I see Nova's mission as providing a solid interoperable API experience
> for on-demand compute resources. Right now, that's happening best by
> keeping things in tree, but we are doing work to make other options
> possible.
>
> I actually see the existence of projects such as Cinder, Heat and
> Magnum as success stories born out of Nova saying no to expanding our
> scope (and in the case of Cinder, actively trying to reduce our
> scope). I hope more of both of those things will happen in the future.
>
> If we had accepted these efforts into Nova, they would not have had
> the freedom they get by living inside OpenStack, but outside of
> Compute. Something the big tent makes much easier to deal with. I
> don't think they would have gained much by being inside the compute
> project, mostly because we are all crazy busy looking after Nova.
>
> Thanks,
> John
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Voting on the Nova project meeting times

2015-06-04 Thread John Garbutt
Hi,

We have a regular Nova project meeting with alternating times, as
described here:
https://wiki.openstack.org/wiki/Meetings/Nova

I will lean towards no change to the times, given we are used to them.
But I want to double check that there isn't a group of contributors
we are accidentally excluding due to the times we have picked.

Please vote here:
http://doodle.com/eyzvnawzv86ubtaw

If doodle doesn't work in your country, or whatever, do email me
directly and I can add in your vote. I could have used something
better, but doodle was easy.

Thanks,
John

PS
It's possible the majority choice is the wrong choice for complex
reasons. This is just the best way I could think of to get some quick
feedback on the times we are currently using, and possible
alternatives. That's just a heads up.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Liberty Priorities for Nova

2015-06-04 Thread Neil Jerram

Hi John,

On 04/06/15 12:21, John Garbutt wrote:

Hi,

We had a great discussion at the summit around priorities:
https://etherpad.openstack.org/p/YVR-nova-liberty-priorities

I have made a stab at writing that up here; please review it if you
are interested:
https://review.openstack.org/#/c/187272/

Note we plan to keep focus on the reviews using the etherpad like we
did in kilo:
https://etherpad.openstack.org/p/liberty-nova-priorities-tracking


As you may have seen, I've collated libvirt/VIF type work at
https://etherpad.openstack.org/p/liberty-nova-libvirt-vif-work. Where
would you prefer any discussion about that to continue? Here on the ML,
or in that review?


Thanks,
Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] novnc console connection timetout

2015-06-04 Thread 张扬
Hi Guys,

I am a newbie to OpenStack. I deployed my first OpenStack environment via
Fuel 6.0, but unfortunately I can never access a VM's VNC console; it
reports a connection timeout. I also found the following call trace in
nova-novncproxy.log. Could anybody give me a hint on it? I am not sure
whether it is appropriate to send this to this mailing list or not; if
not, please help me forward it to the right list. Thanks in advance! :-)

BTW: I also tried to change the def_con_timeout, but it did not work for me.
The following is the log information.

controller: nova.conf

root@node-9:~# cat /etc/nova/nova.conf
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute
flat_interface=eth2.103
#debug=False
debug=True
log_dir=/var/log/nova
network_manager=nova.network.manager.FlatDHCPManager
amqp_durable_queues=False
rabbit_hosts=10.0.21.5:5672
quota_volumes=100
notify_api_faults=False
flat_network_bridge=br100
resume_guests_state_on_host_boot=True
memcached_servers=10.0.21.5:11211
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
rabbit_use_ssl=False
quota_ram=51200
notification_driver=messaging
max_io_ops_per_host=8
quota_max_injected_file_content_bytes=102400
s3_listen=0.0.0.0
quota_driver=nova.quota.NoopQuotaDriver
glance_api_servers=192.168.1.103:9292
max_age=0
quota_security_groups=10
novncproxy_host=192.168.1.103
rabbit_userid=nova
rabbit_ha_queues=True
rabbit_password=FMskSLdn
report_interval=10
scheduler_weight_classes=nova.scheduler.weights.all_weighers
quota_cores=100
reservation_expire=86400
rabbit_virtual_host=/
force_snat_range=0.0.0.0/0
image_service=nova.image.glance.GlanceImageService
use_cow_images=True
quota_max_injected_files=50
notify_on_state_change=vm_and_task_state
scheduler_host_subset_size=30
novncproxy_port=6080
ram_allocation_ratio=1.0
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
quota_security_group_rules=20
disk_allocation_ratio=1.0
quota_max_injected_file_path_bytes=4096
quota_floating_ips=100
quota_key_pairs=10
scheduler_max_attempts=3
cpu_allocation_ratio=8.0
multi_host=True
max_instances_per_host=50
scheduler_available_filters=nova.scheduler.filters.all_filters
public_interface=eth1
service_down_time=60
syslog_log_facility=LOG_LOCAL6
quota_gigabytes=1000
use_syslog_rfc_format=True
quota_instances=100
scheduler_host_manager=nova.scheduler.host_manager.HostManager
notification_topics=notifications
osapi_compute_listen=0.0.0.0
ec2_listen=0.0.0.0
volume_api_class=nova.volume.cinder.API
service_neutron_metadata_proxy=False
use_forwarded_for=False
osapi_volume_listen=0.0.0.0
metadata_listen=0.0.0.0
auth_strategy=keystone
ram_weight_multiplier=1.0
keystone_ec2_url=http://10.0.21.5:5000/v2.0/ec2tokens
quota_metadata_items=1024
osapi_compute_workers=8
rootwrap_config=/etc/nova/rootwrap.conf
rpc_backend=nova.openstack.common.rpc.impl_kombu
fixed_range=10.0.23.0/24
use_syslog=True
metadata_workers=8
dhcp_domain=novalocal
allow_resize_to_same_host=True
flat_injected=False

[DATABASE]
max_pool_size=30
max_retries=-1
max_overflow=40

[database]
idle_timeout=3600
connection=mysql://nova:LcHgm0PN@127.0.0.1/nova?read_timeout=60

[keystone_authtoken]
signing_dirname=/tmp/keystone-signing-nova
signing_dir=/tmp/keystone-signing-nova
auth_port=35357
admin_password=FMxM1wqW
admin_user=nova
auth_protocol=http
auth_host=10.0.21.5
admin_tenant_name=services
auth_uri=http://10.0.21.5:5000/

[conductor]
workers=8

===
compute node: nova.conf

root@node-8:~# cat /etc/nova/nova.conf
[DEFAULT]
notification_driver=ceilometer.compute.nova_notifier
notification_driver=nova.openstack.common.notifier.rpc_notifier
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=metadata
flat_interface=eth2.103
debug=False
log_dir=/var/log/nova
network_manager=nova.network.manager.FlatDHCPManager
amqp_durable_queues=False
vncserver_proxyclient_address=10.0.21.4
rabbit_hosts=10.0.21.5:5672
notify_api_faults=False
flat_network_bridge=br100
memcached_servers=127.0.0.1:11211
rabbit_use_ssl=False
notifica

[openstack-dev] [Ironic] ENROLL state and changing node driver

2015-06-04 Thread Dmitry Tantsur

Hi!

While working on the enroll spec [1], I got to thinking: within the new
state machine, when should we allow changing a node's driver?


My initial idea was to only allow a driver change in ENROLL. That sounds
good to me, but then it will be impossible to change a driver after
moving forward: we don't plan on having a way back to ENROLL from
MANAGEABLE.


What do you folks think we should do?
1. Leave the driver field as it was before
2. Allow changing the driver in ENROLL, and do not allow it later
3. Allow changing the driver in ENROLL only, but create a way back from
MANAGEABLE to ENROLL ("unmanage"??)


Cheers,
Dmitry

[1] https://review.openstack.org/#/c/179151
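Sketched as code, option 2 might look something like this. It is a minimal illustration only: the provision-state names are Ironic's, but the function and exception names are hypothetical, not Ironic's actual code.

```python
# Hypothetical sketch of option 2: reject a driver change once the
# node has left ENROLL. Names/structures are illustrative.
ENROLL = 'enroll'
MANAGEABLE = 'manageable'


class DriverChangeForbidden(Exception):
    pass


def validate_node_patch(node, patch):
    """Allow a JSON-patch that touches /driver only while in ENROLL."""
    changes_driver = any(p.get('path') == '/driver' for p in patch)
    if changes_driver and node['provision_state'] != ENROLL:
        raise DriverChangeForbidden(
            'node %s: driver can only be changed in the %s state'
            % (node['uuid'], ENROLL))
    return patch
```

With a check like this in the API layer, option 3 would then just be a matter of adding an "unmanage" transition back to ENROLL.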

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][trove] Protected openstack resources

2015-06-04 Thread Amrith Kumar
John,

Thanks for your note. I've updated the review at 
https://review.openstack.org/#/c/186357/ with answers to some of your questions 
(and I added you to that review).

Trove's use case, like those of some of the other projects listed, is
different from Glance's in that Trove has a guest agent. I've tried to
explain that in more detail in patch set 5. I'd appreciate your comments.

Thanks,

-amrith

| -Original Message-
| From: John Garbutt [mailto:j...@johngarbutt.com]
| Sent: Thursday, June 04, 2015 6:54 AM
| To: OpenStack Development Mailing List (not for usage questions)
| Subject: Re: [openstack-dev] [nova][trove] Protected openstack resources
| 
| On 3 June 2015 at 20:51, Amrith Kumar  wrote:
| > Several of us including Bruno Lago, Victoria Martinez de la Cruz
| > (vkmc), Flavio Percoco, Nikhil Manchanda (SlickNik), Vipul Sabhaya
| > (vipul), Doug Shelley (dougshelley66), and several others from the
| > Trove team met in Vancouver and were joined by some others (who I do
| > not know by name) from other projects. After the summit, I summarized
| > that meeting in the email thread here [1].
| >
| > In that meeting, and in the course of other conversations, we
| > concluded that projects like Trove require the ability to launch
| > instances of resources from projects like Nova but bad things happen
| > when a user directly interacts with these resources. It would
| > therefore be highly advantageous to have a class of instances which are
| protected from direct user actions.
| >
| > The “bad things” described above stem from the fact that the guest
| > agent that Trove uses is a component that is on the guest instance and
| > it communicates with the other Trove controller services over an
| > oslo.messaging message queue. If a guest instance were compromised,
| > the fact that it has a connection path to the message queue could
| > become a vulnerability. Deployers of Trove have addressed these
| > concerns and are able to operate a secure Trove system by launching
| > Nova instances in a different tenant than the end user. The changes to
| > Trove for this are currently not part of Trove but will be made
| available shortly.
| 
| FWIW, this is basically how Glance uses a multi-tenant Swift to store
| images from Nova from various tenants.
| 
| I think there are more exciting ways that some folks have brewing that
| involve some sort of combination of two tenants, or some such.
| 
| > Using oslo.messaging for the communication between Trove controller
| > and the guest agents allows deployers to choose the underlying AMQP
| transport.
| > However, oslo.messaging is tightly coupled with AMQP semantics. One
| > proposed alternative (zaqar) that could address some of Trove’s issues
| > has no integration with oslo.messaging.
| >
| > Therefore, to adopt zaqar, Trove would likely have to abandon
| > oslo.messaging and integrate tightly with zaqar which strikes many of
| > us as more restrictive and less attractive. I know of at least one
| > user of Trove who has deployed oslo.messaging with qpid as the
| > underlying transport, rather than the more commonly deployed RabbitMQ.
| >
| > The request to create an oslo.messaging driver for zaqar (or was it a
| > zaqar driver for oslo.messaging?) met with some resistance for technical
| > reasons. Flavio summarizes it in [2], saying, “This is probably the main
| > reason why there's no driver for Zaqar in oslo.messaging. That is, to
| > prevent people from actually using Zaqar as a message bus in openstack.”
| 
| So you could create a REST API for your agent to talk to. They are quite
| well understood, but I have no idea about how your agent talks to your
| server, so it could be a terrible idea.
| 
| > Other projects like Sahara, and potentially others need a mechanism by
| > which to protect their resources from direct manipulation by a user.
| >
| > Several conversations ensued with members of Nova team and Bruno
| > drafted a write-up summarizing some aspects of the problems. To
| > facilitate a quick review of this request, the Trove team has put
| > together a document, and it is available for review at [3].
| >
| > The request is to have Nova and potentially other OpenStack projects
| > review the issues being described. They can then provide protected
| > resources that projects like Trove can consume.
| >
| > Equally, if you work on some other project that could benefit from
| > protected resources, please chime in.
| >
| > Please post comments on the request on the review ([4]) and register
| > blueprints or work towards delivering these capabilities in your
| > respective projects. The request is not prescriptive of how projects
| > like Nova should implement these capabilities, it merely requests that
| they be created.
| 
| Why is running your Nova VMs in a trove- or sahara-specific tenant not
| good enough for your use case?
| 
| I am not trying to be difficult, I am just curious about what specific
| issues something "better" would need to fix.
| 

Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Adam Young

On 06/04/2015 06:32 AM, Sean Dague wrote:

On 06/03/2015 08:40 PM, Tim Hinrichs wrote:

As long as there's some way to get the *declarative* policy from the
system (as a data file or as an API call) that sounds fine.  But I'm
dubious that it will be easy to keep the API call that returns the
declarative policy in sync with the actual code that implements that policy.

Um... why? Nova (or any other server project) needs to know what the
currently computed policy is to actually enforce it internally. Turning
around and spitting that back out on the wire is pretty straight forward.

Is there some secret dragon I'm missing here?


No.  But it is a significant bit of coding to do; you would need to
crawl every API and make sure you hit every code path that could enforce
policy.  However, I've contemplated doing something like that with
oslo.policy already: run a workload through a server with policy
non-enforcing (permissive mode) and log the output to a file, then use
that output to modify either the policy or the delegations (role
assignments or trusts) used in a workflow.
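The permissive-mode idea can be sketched roughly as follows. This is illustrative pseudo-tooling with hand-rolled rule callables, not oslo.policy's actual API; the point is only the shape of "evaluate, log denials, don't enforce".

```python
# Rough sketch of permissive-mode policy checking: evaluate the normal
# rule, log the would-be denial, and only raise when enforcing=True.
import logging

LOG = logging.getLogger('policy.audit')


def check_policy(rules, action, creds, enforcing=False):
    """Return the real decision; only enforce it when enforcing=True."""
    rule = rules.get(action, lambda c: False)  # unknown action: deny
    allowed = rule(creds)
    if not allowed:
        LOG.warning('policy would deny %s for roles %s',
                    action, creds.get('roles'))
        if enforcing:
            raise PermissionError(action)
    return allowed


# Example rule table: one callable per action.
rules = {'compute:create': lambda c: 'member' in c.get('roles', [])}
```

Running a workload with `enforcing=False` yields exactly the audit log Adam describes, which could then drive changes to policy or delegations.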


The hard-coded defaults worry me, though.  Nova is one piece (a big one,
admittedly) of a delicate dance across multiple (not-so-micro) services
that make up OpenStack.  Other services are going to take their cue from
what Nova does, and that would make the overall flow that much harder to
maintain.


I think we need to break some very ingrained patterns in our policy
enforcement.  I would worry that enforcing policy in code would give us
something that we could not work around.  Instead, I think we need to
ensure that the Nova team leads the rest of the OpenStack core services
in setting up best practices, and that is primarily a communication
issue.  Getting to a common understanding of RBAC, and making it clear
how roles are modified on a per-API basis, will make Nova more robust.




-Sean


Tim

On Wed, Jun 3, 2015 at 1:53 PM, Doug Hellmann  wrote:

 Excerpts from Sean Dague's message of 2015-06-03 13:34:11 -0400:
 > On 06/03/2015 12:10 PM, Tim Hinrichs wrote:
 > > I definitely buy the idea of layering policies on top of each other.
 > > But I'd worry about the long-term feasibility of putting default
 > > policies into code mainly because it ensures we'll never be able to
 > > provide any tools that help users (or other services like
 Horizon) know
 > > what the effective policy actually is.  In contrast, if the code
 is just
 > > an implementation of the API, and there is some (or perhaps several)
 > > declarative description(s) of which of those APis are permitted
 to be
 > > executed by whom, we can build tools to analyze those policies.  Two
 > > thoughts.
 > >
 > > 1) If the goal is to provide warnings to the user about
 questionable API
 > > policy choices, I'd suggest adding policy-analysis functionality
 to say
 > > oslo_policy.  The policy-analysis code would take 2 inputs: (i) the
 > > policy and (ii) a list of policy properties, and would generate a
 > > warning if any of the properties are true for the given policy.
  Then
 > > each project could provide a file that describes which policy
 properties
 > > are questionable, and anyone wanting to see the warnings run the
 > > functionality on that project's policy and the project's policy
 property
 > > file.
 > >
 > > It would definitely help me if we saw a handful of examples of the
 > > warnings we'd want to generate.
 >
 > WARN: "server create permissions have been restricted from the
 default,
 > this may impede operation and interoperability of your OpenStack
 > installation"
 >
 > > 2) If the goal is to provide sensible defaults so the system
 functions
 > > if there's no policy.json (or a dynamic policy cached from
 Keystone),
 > > why not create a default_policy.json file and use that whenever
 > > policy.json doesn't exist (or more precisely to use policy.json to
 > > override default_policy.json in some reasonable way).
 >
 > Because it's still a file, living in /etc. Files living in etc are
 > things people feel they can modify. They are also things that don't
 > always get deployed correctly with code deploys. People might not
 > realize that default_policy.json is super important to be updated
 every
 > time the code is rolled out.

 It doesn't have to live in /etc, though. It could be packaged in the
 nova code namespace as a data file, and accessed from there.

 Doug
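Doug's suggestion could be sketched like this. The file paths and rule names are illustrative; a real implementation would resolve the default file from the package's own data files rather than a hard-coded path.

```python
# Sketch: ship sane defaults as a data file in the code namespace and
# let an operator's /etc policy.json override individual rules.
import json
import os


def load_effective_policy(default_path, override_path):
    """Packaged defaults, with per-rule operator overrides on top."""
    with open(default_path) as f:
        policy = json.load(f)
    if os.path.exists(override_path):
        with open(override_path) as f:
            policy.update(json.load(f))  # operator-set rules win per key
    return policy
```

An operator editing /etc/nova/policy.json would then only carry the rules they actually changed, while the defaults roll out with the code.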

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 
 http://list

Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-04 Thread Fox, Kevin M
Some kind of intermediate mapping might be better. With LDAP, I don't have
control over the groups users are assigned to, since that's an
enterprise/AD thing, and there can be a lot of them. Group-to-role
relations, I guess, do that mapping. Though passing groups directly, when
domains can have different group meanings, might be a big problem.

Does federation have a way to map a federated group to a local group somehow?

Thanks,
Kevin


From: Steve Martinelli
Sent: Wednesday, June 03, 2015 8:19:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
X-Group-xxxx in token validation

Dozens to hundreds of roles or endpoints could cause an issue now :)

But yeah, groups are much more likely to number in the dozens than roles or 
endpoints. But I think the Fernet token size is so small that it could probably 
handle this (since it does so now for the federated workflow).

Thanks,

Steve Martinelli
OpenStack Keystone Core



From: "Fox, Kevin M" 
To: "OpenStack Development Mailing List (not for usage questions)" 
Date: 06/03/2015 11:14 PM
Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing
X-Group-xxxx in token validation




Will dozens to a hundred groups or so on one user cause issues? :)

Thanks,
Kevin


From: Morgan Fainberg
Sent: Wednesday, June 03, 2015 7:23:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
X-Group-xxxx in token validation

In general I am of the opinion that, with the move to Fernet, there is no
good reason we should avoid adding the group information into the token.

--Morgan

Sent via mobile

On Jun 3, 2015, at 18:44, Dolph Mathews  wrote:


On Wed, Jun 3, 2015 at 5:58 PM, John Wood  wrote:
Hello folks,

There has been discussion about adding user group support to the per-secret
access control list (ACL) feature in Barbican. Hence secrets could be marked
as accessible by a group on the ACL rather than by an individual user, as
implemented now.

Our understanding, however, is that Keystone does not pass along a user's
group information during token validation (such as in the form of
X-Group-Ids/X-Group-Names headers passed along via Keystone middleware).

The pre-requisite for including that information in the form of headers would 
be adding group information to the token validation response. In the case of 
UUID, it would be pre-computed and stored in the DB at token creation time. In 
the case of PKI, it would be encoded into the PKI token and further bloat PKI 
tokens. And in the case of Fernet, it would be included at token validation 
time.

Including group information, however, would also let us efficiently revoke tokens 
using token revocation events when group membership is affected in any way 
(user being removed from a group, a group being deleted, or a group-based role 
assignment being revoked). The OS-FEDERATION extension is actually already 
including groups in tokens today, as a required part of the federated workflow. 
We'd effectively be introducing that same behavior into the core Identity API 
(see the federated token example):

  
https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3-os-federation-ext.rst#request-an-unscoped-os-federation-token

This would allow us to address bugs such as:

  https://bugs.launchpad.net/keystone/+bug/1268751

In the past, we shied away from including groups if only to avoid bloating the 
size of PKI tokens any further (but now we have Fernet tokens providing a 
viable alternative). Are there any other reasons not to add group information 
to the token validation response?
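If the validation response did carry groups, middleware could surface them to services the same way it surfaces roles today. A hypothetical sketch (the X-Group-Ids header is the proposal in this thread, not an existing keystonemiddleware header, and the dict shapes are illustrative):

```python
# Hypothetical: expose groups from a validated token as request headers,
# mirroring how X-Roles works today. Not an implemented API.
def build_auth_headers(token_data):
    user = token_data['user']
    return {
        'X-User-Id': user['id'],
        'X-Roles': ','.join(r['name'] for r in token_data.get('roles', [])),
        'X-Group-Ids': ','.join(g['id'] for g in user.get('groups', [])),
    }
```

Barbican's ACL check could then match the group ids from that header against the groups on a secret's ACL.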


Would the community consider this a useful feature? Would the community 
consider adding this support to Liberty?

Thank you,
John


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/

Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-04 Thread Dolph Mathews
To clarify: we already have to include the groups produced as a result of
federation mapping **in the payload** of Fernet tokens so that scoped
tokens can be created later:


https://github.com/openstack/keystone/blob/a637ebcbc4a92687d3e80a50cbe88df3b13c79e6/keystone/token/providers/fernet/token_formatters.py#L523

These are OpenStack group IDs, so it's up to the deployer to keep those
under control to keep Fernet token sizes down. It's the only place in the
current Fernet implementation that's (somewhat alarmingly) unbounded in the
real world.

But we do **not** have a use case to add groups to *all* Fernet payloads:
only to token creation & validation responses.
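The size concern discussed in this thread can be illustrated with a small sketch (not keystone's actual formatter code): UUID-formatted IDs can be packed as 16 raw bytes before going into the payload, while non-UUID IDs (possible with, e.g., LDAP backends) have to be carried verbatim, which is what makes the size unbounded. Note a real formatter must also record whether an ID was converted; this sketch simply assumes non-UUID IDs are never exactly 16 bytes long.

```python
import uuid


def compress_id(id_string):
    """Pack an ID compactly for a token payload."""
    try:
        # A UUID-formatted ID shrinks from 32 hex chars to 16 raw bytes.
        return uuid.UUID(id_string).bytes
    except ValueError:
        # Non-UUID IDs stay verbatim, so payload size is unbounded.
        return id_string.encode("utf-8")


def restore_id(blob):
    """Reverse compress_id (assumes non-UUID IDs are not 16 bytes)."""
    if len(blob) == 16:
        return uuid.UUID(bytes=blob).hex
    return blob.decode("utf-8")
```

A quick comparison: a UUID-style group ID costs 16 bytes packed this way, while an arbitrary-length LDAP group name costs its full UTF-8 length, which is why deployers are asked to keep group counts under control.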

On Thu, Jun 4, 2015 at 2:36 AM, Morgan Fainberg 
wrote:

> For Fernet, the groups would only be populated on validate as Dolph
> outlined. They would not be added to the core payload. We do not want to
> expand the payload in this manner.
>
> --Morgan
>
> Sent via mobile
>
> On Jun 3, 2015, at 21:51, Lance Bragstad  wrote:
>
> I feel if we allowed group ids to be an attribute of the Fernet's core
> payload, we continue to open up the possibility for tokens to be greater
> than the initial "acceptable" size limit for a Fernet token (which I
> believe was 255 bytes?). With this, I think we need to provide guidance on
> the number of group ids allowed within the token before that size limit is
> compromised.
>
> We've landed patches recently that allow for id strings to be included in
> the Fernet payload [0], regardless of being uuid format (which can be
> converted to bytes before packing to save space, this is harder for us to
> do with non-uuid format id strings). This can also cause the Fernet token
> size to grow. If we plan to include more information in the Fernet token
> payload I think we should determine if the original acceptable size limit
> still applies and regardless of what that size limit is provide some sort
> of "best practices" for helping deployments keep their token size as small
> as possible.
>
>
> Keeping the tokens user (and developer) friendly was a big plus in the
> design of Fernet, and providing resource for deployments to maintain that
> would be helpful.
>
>
> [0]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:bug/1459382,n,z
>
> On Wed, Jun 3, 2015 at 10:19 PM, Steve Martinelli 
> wrote:
>
>> Dozens to hundreds of roles or endpoints could cause an issue now :)
>>
>> But yeah, groups are much more likely to number in the dozens than roles
>> or endpoints. But I think the Fernet token size is so small that it could
>> probably handle this (since it does so now for the federated workflow).
>>
>> Thanks,
>>
>> Steve Martinelli
>> OpenStack Keystone Core
>>
>>
>>
>> From: "Fox, Kevin M" 
>> To: "OpenStack Development Mailing List (not for usage
>> questions)" 
>> Date: 06/03/2015 11:14 PM
>> Subject: Re: [openstack-dev] [keystone][barbican] Regarding
>> exposing X-Group-xxxx in token validation
>> --
>>
>>
>>
>> Will dozens to a hundred groups or so on one user cause issues? :)
>>
>> Thanks,
>> Kevin
>>
>> --
>> *From:* Morgan Fainberg
>> * Sent:* Wednesday, June 03, 2015 7:23:22 PM
>> * To:* OpenStack Development Mailing List (not for usage questions)
>> * Subject:* Re: [openstack-dev] [keystone][barbican] Regarding exposing
>> X-Group-xxxx in token validation
>>
>> In general I am of the opinion with the move to Fernet there is no good
>> reason we should avoid adding the group information into the token.
>>
>> --Morgan
>>
>> Sent via mobile
>>
>> On Jun 3, 2015, at 18:44, Dolph Mathews <*dolph.math...@gmail.com*
>> > wrote:
>>
>>
>> On Wed, Jun 3, 2015 at 5:58 PM, John Wood <*john.w...@rackspace.com*
>> > wrote:
>> Hello folks,
>>
>> There has been discussion about adding user group support to the
>> per-secret access control list (ACL) feature in Barbican. Hence secrets
>> could be marked as accessible by a group on the ACL rather than an
>> individual user as implemented now.
>>
>> Our understanding is that Keystone does not pass along a user’s group
>> information during token validation however (such as in the form of
>> X-Group-Ids/X-Group-Names headers passed along via Keystone middleware).
>>
>> The pre-requisite for including that information in the form of headers
>> would be adding group information to the token validation response. In the
>> case of UUID, it would be pre-computed and stored in the DB at token
>> creation time. In the case of PKI, it would be encoded into the PKI token
>> and further bloat PKI tokens. And in the case of Fernet, it would be
>> included at token validation time.
>>
>> Including group information, however, would also let us efficiently revoke
>> tokens using token revocation events when group membership is affected in
>> any way (user being removed from a group, a group being deleted, or a
>> group-based role assignment being revoked). The 

Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-06-04 Thread Kyle Mestery
It's a been a week since I proposed this, with no objections. Welcome to
the Neutron core reviewer team as the new QA Lieutenant Assaf!

On Tue, Jun 2, 2015 at 12:35 PM, Maru Newby  wrote:

> +1 from me, long overdue!
>
>
> > On May 28, 2015, at 9:42 AM, Kyle Mestery  wrote:
> >
> > Folks, I'd like to propose Assaf Muller to be a member of the Neutron
> core reviewer team. Assaf has been a long time contributor in Neutron, and
> he's also recently become my testing Lieutenant. His influence and
> knowledge in testing will be critical to the team in Liberty and beyond. In
> addition to that, he's done some fabulous work for Neutron around L3 HA and
> DVR. Assaf has become a trusted member of our community. His review stats
> place him in the pack with the rest of the Neutron core reviewers.
> >
> > I'd also like to take this time to remind everyone that reviewing code
> is a responsibility, in Neutron the same as other projects. And core
> reviewers are especially beholden to this responsibility. I'd also like to
> point out that +1/-1 reviews are very useful, and I encourage everyone to
> continue reviewing code even if you are not a core reviewer.
> >
> > Existing Neutron cores, please vote +1/-1 for the addition of Assaf to
> the core reviewer team.
> >
> > Thanks!
> > Kyle
> >
> > [1] http://stackalytics.com/report/contribution/neutron-group/180
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Serg Melikyan
Hi Vahid,

Your analysis is correct, and integration of heat-translator is as
simple as you described in your document. It would be really
awesome if you would turn this PDF into a proper specification for the
blueprint.

P.S. Regarding separate stacks for applications: currently HOT-based
packages create a stack per application, and we don't support the same
level of composition that we have in MuranoPL-based packages. This is
another area for improvement.

On Wed, Jun 3, 2015 at 12:44 AM, Georgy Okrokvertskhov
 wrote:
> Hi Vahid,
>
> Thank you for sharing your thoughts.
> I have a question about application life-cycle if we use the TOSCA
> translator. In Murano the main advantage of using the HOT format is that we
> can update a Heat stack with resources as soon as we need to deploy an
> additional application. We can dynamically create multi-tier applications
> using other apps as building blocks. Imagine a Java app on top of Tomcat
> (VM1) and a PostgreSQL DB (VM2). All three components are different apps in
> the catalog. Murano allows you to bring them together and deploy them.
>
> Do you think it will be possible to use the TOSCA translator for Heat stack
> updates? What will we do if we have two apps with two TOSCA templates, like
> Tomcat and Postgres? How can we combine them?
>
> Thanks
> Gosha
>
> On Tue, Jun 2, 2015 at 12:14 PM, Vahid S Hashemian
>  wrote:
>>
>> This is what I have so far.
>>
>>
>>
>> Would love to hear feedback on it. Thanks.
>>
>> Regards,
>>
>> -
>> Vahid Hashemian, Ph.D.
>> Advisory Software Engineer, IBM Cloud Labs
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Georgy Okrokvertskhov
> Architect,
> OpenStack Platform Products,
> Mirantis
> http://www.mirantis.com
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com



Re: [openstack-dev] [cinder] Does zonemanager support virtual fabric?

2015-06-04 Thread yang, xing
Here's a cinder spec that proposes adding support for Brocade Virtual 
Fabrics:

https://review.openstack.org/#/c/144389/

Cisco already has vSAN support.

Thanks,
Xing


From: liuxinguo [mailto:liuxin...@huawei.com]
Sent: Thursday, June 04, 2015 6:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: zhangke (O); Luozhen
Subject: [openstack-dev] [cinder] Does zonemanager support virtual fabric?

Hi,

Many FC switches support virtual fabrics, and sometimes we need to manage a 
virtual fabric: create zones, delete zones, etc.
But it seems there is no virtual fabric support in our zone manager.

So can we manage virtual fabrics with the zone manager? If not, how should we 
manage them?

Any input will be appreciated!

Thanks,
Liu


[openstack-dev] [Ironic] Time to decide something on the vendor tools repo

2015-06-04 Thread Dmitry Tantsur

Hi again!

~ half an hour has passed since my last email, and now I have one more 
question to discuss and decide!


At the summit we were discussing things like chassis discovery, and 
arrived at a rough conclusion that we want it to live somewhere in a 
separate repo. More precisely, we wanted some place for vendors to 
contribute code (aka scripts) that isn't a good fit for either the standard 
interfaces or the existing vendor passthrough (chassis discovery, again, is a 
good example).


I suggest we finally decide something to unblock people. A few questions 
follow:


Should we
1. create one repo for all vendors (say, ironic-contrib-tools),
2. create a repo for every vendor that appears, or
3. ask vendors to go to stackforge, at least until their solution 
takes shape (like we did with inspector)?

4. %(your_variant)s

If we go down the 1-2 route, should
1. the ironic-core team own the new repo(s)?
2. or should we form a new team from interested people?
(1 and 2 are not exclusive, actually).

I personally would go for #3 - stackforge. We already have e.g. 
stackforge/proliantutils as an example of something closely related to 
Ironic, but still independent.


I'm also fine with #1 + #2 (one repo, owned by a group of interested people).

What do you think?

Dmitry



Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-04 Thread Darren J Moffat



On 06/04/15 14:03, Fox, Kevin M wrote:

Some kind of intermediate mapping might be better. With LDAP, I don't
have control over the groups users are assigned to, since that's an
enterprise/AD thing. There can be a lot of them. Group-to-role
relations I guess do that mapping. Though maybe passing groups directly
when domains can have different group meanings might be a big problem.


Agreed, and this has caused problems for other systems in the past.

For example, the traditional AUTH_SYS flavor used by RPC for NFS only allowed 
a user to be in 16 groups, because that was all the payload could hold. 
As more people moved from NIS to LDAP (and for some, even while still on NIS 
or NIS+), 16 groups became a big issue.


Now modern Linux and Solaris kernels support a user being in 1024 groups 
by having the consumer (usually the NFS server) check with the directory 
server (usually LDAP) whenever the credential carries exactly 16 groups.


So we know it is already common for LDAP directories to have users in a 
significant number of groups.
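The workaround Darren describes can be sketched in a few lines (illustrative only, not actual NFS server code): exactly 16 GIDs in the credential is treated as "possibly truncated", triggering a directory lookup.

```python
# The AUTH_SYS credential payload can carry at most 16 supplementary GIDs.
AUTH_SYS_MAX_GIDS = 16


def effective_groups(rpc_gids, lookup_full_list):
    """Return the group list to enforce against.

    rpc_gids: GIDs carried in the AUTH_SYS credential.
    lookup_full_list: callable that queries the directory server
    (e.g. LDAP) for the user's complete group list.
    """
    if len(rpc_gids) == AUTH_SYS_MAX_GIDS:
        # Payload may be truncated: fall back to the directory server.
        return lookup_full_list()
    return list(rpc_gids)
```

The design point for the Keystone discussion is the same: any fixed-size payload eventually needs an out-of-band lookup path once group membership grows past the limit.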


--
Darren J Moffat



Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Sean Dague
On 06/04/2015 08:52 AM, Adam Young wrote:
> On 06/04/2015 06:32 AM, Sean Dague wrote:
>> On 06/03/2015 08:40 PM, Tim Hinrichs wrote:
>>> As long as there's some way to get the *declarative* policy from the
>>> system (as a data file or as an API call) that sounds fine.  But I'm
>>> dubious that it will be easy to keep the API call that returns the
>>> declarative policy in sync with the actual code that implements that
>>> policy.
>> Um... why? Nova (or any other server project) needs to know what the
>> currently computed policy is to actually enforce it internally. Turning
>> around and spitting that back out on the wire is pretty straight forward.
>>
>> Is there some secret dragon I'm missing here?
> 
> No.  But it is a significant bit of coding to do;  you would need to
> crawl every API and make sure you hit every code path that could enforce
> policy.  

Um, I don't understand that.

I'm saying that you'd "GET https://my.nova.api.server/policy"

And it would return basically policy.json. There is no crawling every
bit, this is a standard entry point to return a policy representation.
Getting all services to implement this would mean that Keystone could
support interesting policy things with arbitrary projects, not just a
small curated list, which is going to be really important in a big tent
world. Monasca and  Murano are just as important to support here as Nova
and Swift.
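A minimal sketch of that idea: hard-coded defaults registered in code, overlaid with the operator's policy.json, and the merged result served back as the effective policy. The rule names and function below are illustrative, not Nova's real policy targets or API.

```python
import json

# Hypothetical in-code defaults, registered the way the thread
# describes; an operator only overrides what they need to change.
DEFAULT_RULES = {
    "compute:get": "rule:admin_or_owner",
    "compute:delete": "rule:admin_or_owner",
    "compute:resize": "rule:admin_api",
}


def effective_policy(overrides_json=None):
    """Return what a "GET /policy" endpoint could serve: the code
    defaults overlaid with the deployment's policy.json, so the
    response reflects what is actually enforced."""
    rules = dict(DEFAULT_RULES)
    if overrides_json:
        rules.update(json.loads(overrides_json))
    return json.dumps(rules, indent=2, sort_keys=True)
```

Because the endpoint returns the *computed* policy, operators can omit rules they never touch, and external tools (Keystone included) still see the complete enforced set.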

> However, I've contemplated doing something like that with
> oslo.policy already;  run a workload through a server with policy
> non-enforcing (Permissive mode) and log the output to a file, then use
> that output to modify either the policy or the delegations (role
> assignments or trusts) used in a workflow.
> 
> The Hard coded defaults worry me, though.  Nova is one piece (a big one,
> admittedly) of a delicate dance across multiple (not-so-micro) services
> that make up OpenStack. Other services are going to take their cue from
> what Nova does, and that would make the overall flow that much harder to
> maintain.

I don't understand why having hard coded defaults makes things harder,
as long as they are discoverable. Defaults typically make things easier,
because people then only change what they need, instead of setting a
value for everything, having the deployment code update, and making
their policy miss an important thing, or make something wrong because
they didn't update it correctly at the same time as code.

> I think we need to break some very ingrained patterns in out policy
> enforcement.  I would worry that enforcing policy in code would give us
> something that we could not work around.  Instead, I think we need to
> ensure that the  Nova team leads the rest of the OpenStack core services
> in setting up best practices, and that is primarily a communication
> issue.  Getting to a common understanding of RBAC, and making it clear
> how roles are modified on a per-api basis will make Nova more robust.

So I feel like I understand the high level dynamic policy end game. I
feel like what I'm proposing for a policy engine with encoded defaults
doesn't negatively impact that. I feel there is a middle chunk where
perhaps we've got different concerns or different dragons that we see,
and are mostly talking past each other. And I don't know how to bridge
that. All the keystone specs I've dived into definitely assume a level
of understanding of keystone internals and culture that aren't obvious
from the outside. I'll be honest, if the only way to collaborate here is
for every project to fully load all of keystone architecture into their
heads, I think this effort is going to stall out. Which would suck.

So maybe we need to step back and explain what the challenges are, what
changes are expected at each step (for developers and operators), and
err on the side of over explaining things so folks that are not familiar
with all the nuances addressed here can still see the flow.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova][trove] Protected openstack resources

2015-06-04 Thread Doug Hellmann
Excerpts from Amrith Kumar's message of 2015-06-04 12:46:37 +:
> John,
> 
> Thanks for your note. I've updated the review at 
> https://review.openstack.org/#/c/186357/ with answers to some of your 
> questions (and I added you to that review).
> 
Trove's use-case, like those of some of the other projects listed, differs from 
Glance's in that Trove has a guest agent. I've tried to explain that in more 
detail in patch set 5. I'd appreciate your comments.

We solved this in Akanda by placing the service VMs in a special
tenant, isolating them with security group rules, and then giving
the agent running in the VM a REST API connected to a private
management network owned by the same tenant that owns the VM. All
communication with the agent starts from a service on the outside,
through that management network. The VMs act as routers, so they
are also attached to the cloud-user's networks, but the agent doesn't
respond on those networks.

Doug



Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-06-04 Thread Assaf Muller
Thank you.

We have a lot of work ahead of us :)


- Original Message -
> It's a been a week since I proposed this, with no objections. Welcome to the
> Neutron core reviewer team as the new QA Lieutenant Assaf!
> 
> On Tue, Jun 2, 2015 at 12:35 PM, Maru Newby < ma...@redhat.com > wrote:
> 
> 
> +1 from me, long overdue!
> 
> 
> > On May 28, 2015, at 9:42 AM, Kyle Mestery < mest...@mestery.com > wrote:
> > 
> > Folks, I'd like to propose Assaf Muller to be a member of the Neutron core
> > reviewer team. Assaf has been a long time contributor in Neutron, and he's
> > also recently become my testing Lieutenant. His influence and knowledge in
> > testing will be critical to the team in Liberty and beyond. In addition to
> > that, he's done some fabulous work for Neutron around L3 HA and DVR. Assaf
> > has become a trusted member of our community. His review stats place him
> > in the pack with the rest of the Neutron core reviewers.
> > 
> > I'd also like to take this time to remind everyone that reviewing code is a
> > responsibility, in Neutron the same as other projects. And core reviewers
> > are especially beholden to this responsibility. I'd also like to point out
> > that +1/-1 reviews are very useful, and I encourage everyone to continue
> > reviewing code even if you are not a core reviewer.
> > 
> > Existing Neutron cores, please vote +1/-1 for the addition of Assaf to the
> > core reviewer team.
> > 
> > Thanks!
> > Kyle
> > 
> > [1] http://stackalytics.com/report/contribution/neutron-group/180
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-04 Thread Lance Bragstad
On Thu, Jun 4, 2015 at 8:18 AM, Dolph Mathews 
wrote:

> To clarify: we already have to include the groups produced as a result of
> federation mapping **in the payload** of Fernet tokens so that scoped
> tokens can be created later:
>
>
> https://github.com/openstack/keystone/blob/a637ebcbc4a92687d3e80a50cbe88df3b13c79e6/keystone/token/providers/fernet/token_formatters.py#L523
>
> These are OpenStack group IDs, so it's up to the deployer to keep those
> under control to keep Fernet token sizes down. It's the only place in the
> current Fernet implementation that's (somewhat alarmingly) unbounded in the
> real world.
>
> But we do **not** have a use case to add groups to *all* Fernet payloads:
> only to token creation & validation responses.
>

Ah, that makes sense. So we would be adding logic to get_token_data() [0]
that would allow for groups to be populated in the response based on the
user id? For that we shouldn't need anything in the token outside of the
user_id, right? We would just need get_token_data to call the identity_api
for the groups a user belongs to [1]. This makes sense, I was thinking we
were going to pull all groups *inside* the Fernet payload.

[0]
https://github.com/openstack/keystone/blob/a637ebcbc4a92687d3e80a50cbe88df3b13c79e6/keystone/token/providers/common.py#L413
[1]
https://github.com/openstack/keystone/blob/a637ebcbc4a92687d3e80a50cbe88df3b13c79e6/keystone/identity/core.py#L977
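That flow might look roughly like the sketch below. This is not keystone's actual code: FakeIdentityAPI stands in for the real identity_api driver, and the token dict is simplified. The point is that groups are resolved at validation time from the user_id alone, so nothing extra goes into the Fernet payload.

```python
class FakeIdentityAPI:
    """Stand-in for keystone's identity_api backend."""

    def __init__(self, membership):
        # membership: user_id -> list of group dicts
        self._membership = membership

    def list_groups_for_user(self, user_id):
        return self._membership.get(user_id, [])


def get_token_data(user_id, identity_api, include_groups=True):
    """Build a (simplified) token validation response body."""
    token = {"user": {"id": user_id}}
    if include_groups:
        # Looked up on validation, keyed only on the user_id already
        # in the token, so the Fernet payload itself stays small.
        token["user"]["groups"] = [
            {"id": g["id"], "name": g["name"]}
            for g in identity_api.list_groups_for_user(user_id)
        ]
    return token
```

Middleware could then expose the result as X-Group-Ids/X-Group-Names headers without keystone ever persisting groups in the token itself.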


>
> On Thu, Jun 4, 2015 at 2:36 AM, Morgan Fainberg  > wrote:
>
>> For Fernet, the groups would only be populated on validate as Dolph
>> outlined. They would not be added to the core payload. We do not want to
>> expand the payload in this manner.
>>
>> --Morgan
>>
>> Sent via mobile
>>
>> On Jun 3, 2015, at 21:51, Lance Bragstad  wrote:
>>
>> I feel if we allowed group ids to be an attribute of the Fernet's core
>> payload, we continue to open up the possibility for tokens to be greater
>> than the initial "acceptable" size limit for a Fernet token (which I
>> believe was 255 bytes?). With this, I think we need to provide guidance on
>> the number of group ids allowed within the token before that size limit is
>> compromised.
>>
>> We've landed patches recently that allow for id strings to be included in
>> the Fernet payload [0], regardless of being uuid format (which can be
>> converted to bytes before packing to save space, this is harder for us to
>> do with non-uuid format id strings). This can also cause the Fernet token
>> size to grow. If we plan to include more information in the Fernet token
>> payload I think we should determine if the original acceptable size limit
>> still applies and regardless of what that size limit is provide some sort
>> of "best practices" for helping deployments keep their token size as small
>> as possible.
>>
>>
>> Keeping the tokens user (and developer) friendly was a big plus in the
>> design of Fernet, and providing resource for deployments to maintain that
>> would be helpful.
>>
>>
>> [0]
>> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:bug/1459382,n,z
>>
>> On Wed, Jun 3, 2015 at 10:19 PM, Steve Martinelli 
>> wrote:
>>
>>> Dozens to hundreds of roles or endpoints could cause an issue now :)
>>>
>>> But yeah, groups are much more likely to number in the dozens than roles
>>> or endpoints. But I think the Fernet token size is so small that it could
>>> probably handle this (since it does so now for the federated workflow).
>>>
>>> Thanks,
>>>
>>> Steve Martinelli
>>> OpenStack Keystone Core
>>>
>>>
>>>
>>> From: "Fox, Kevin M" 
>>> To: "OpenStack Development Mailing List (not for usage
>>> questions)" 
>>> Date: 06/03/2015 11:14 PM
>>> Subject: Re: [openstack-dev] [keystone][barbican] Regarding
>>> exposing X-Group-xxxx in token validation
>>> --
>>>
>>>
>>>
>>> Will dozens to a hundred groups or so on one user cause issues? :)
>>>
>>> Thanks,
>>> Kevin
>>>
>>> --
>>> *From:* Morgan Fainberg
>>> * Sent:* Wednesday, June 03, 2015 7:23:22 PM
>>> * To:* OpenStack Development Mailing List (not for usage questions)
>>> * Subject:* Re: [openstack-dev] [keystone][barbican] Regarding exposing
>>> X-Group-xxxx in token validation
>>>
>>> In general I am of the opinion with the move to Fernet there is no good
>>> reason we should avoid adding the group information into the token.
>>>
>>> --Morgan
>>>
>>> Sent via mobile
>>>
>>> On Jun 3, 2015, at 18:44, Dolph Mathews <*dolph.math...@gmail.com*
>>> > wrote:
>>>
>>>
>>> On Wed, Jun 3, 2015 at 5:58 PM, John Wood <*john.w...@rackspace.com*
>>> > wrote:
>>> Hello folks,
>>>
>>> There has been discussion about adding user group support to the
>>> per-secret access control list (ACL) feature in Barbican. Hence secrets
>>> could be marked as accessible by a group on the ACL rather than an
>>> individual user as implemented now.
>>>
>>> Our understanding is that Keystone does not pass 

Re: [openstack-dev] [keystone][reseller] New way to get a project scoped token by name

2015-06-04 Thread Rodrigo Duarte
First I have some questions: if we are going to add a delimiter, how do we
handle the OpenStack API stability guidelines [1]? If we add this delimiter to
keystone.conf (with the default value being "." or "/"), do we run into the
same API stability problems?

Personally, I'm in favor of having a way to represent the hierarchy, since
it will loosen the naming restrictions across project and domain entities,
resulting in a better UX. The only remaining restriction will be that we
won't be able to create two entities with the same name at the same level of
the hierarchy. In addition, besides the token request, using a delimiter
also solves the problem of representing a hierarchy in SAML assertions
generated by a keystone identity provider.

Also, if we are going to use a delimiter, we need to update the way
project names are returned by the GET v3/projects API to include the
hierarchy, so the user (or client) knows how to request a token using the
project name.
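For illustration, hierarchy handling with a configurable delimiter might look like the sketch below. The "." default and the function names are assumptions for this thread's discussion, not an agreed API; the ValueError case is the one remaining naming restriction mentioned above.

```python
def split_hierarchy(full_name, delimiter="."):
    """Split e.g. "domain.project.subproject" into its components.

    The delimiter is assumed to be configurable (e.g. in
    keystone.conf); "." is just an illustrative default.
    """
    return full_name.split(delimiter)


def render_hierarchy(parts, delimiter="."):
    """Join hierarchy components back into a single usable string."""
    for part in parts:
        if delimiter in part:
            # Names containing the delimiter would make the string
            # ambiguous, so they must be rejected.
            raise ValueError(
                "name %r contains delimiter %r" % (part, delimiter))
    return delimiter.join(parts)
```

This also shows why changing the delimiter per domain complicates clients: the parser needs the delimiter before it can interpret the name, which is the interoperability concern raised in the thread.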

On Thu, Jun 4, 2015 at 4:33 AM, David Chadwick 
wrote:

> I agree that it is better to choose one global delimiter (ideally this
> should have been done from day one, when hierarchical naming should have
> been used as the basic name form for Openstack). Whatever you choose now
> will cause someone somewhere some pain, but perhaps the overall pain to
> the whole community will be less if you dictate what this delimiter is
> going to be now, but dont introduce for a year. This allows everyone a
> year to remove the delimiter from their names.
>
> regards
>
> David
>
> On 03/06/2015 22:05, Morgan Fainberg wrote:
> > Hi David,
> >
> > There needs to be some form of global hierarchy delimiter - well more to
> > the point there should be a common one across OpenStack installations to
> > ensure we are providing a good and consistent (and more to the point
> > inter-operable) experience to our users. I'm worried a custom defined
> > delimiter (even at the domain level) is going to make it difficult to
> > consume this data outside of the context of OpenStack (there are
> > applications that are written to use the APIs directly).
> >
> > The alternative is to explicitly list the delimiter in the project
> > (e.g. {"hierarchy": {"delim": ".", "name": "domain.project.project2"}}). The
> > additional need to look up the delimiter / set the delimiter when
> > creating a domain is likely to make for a worse user experience than
> > selecting one that is not different across installations.
> >
> > --Morgan
> >
> > On Wed, Jun 3, 2015 at 12:19 PM, David Chadwick  > > wrote:
> >
> >
> >
> > On 03/06/2015 14:54, Henrique Truta wrote:
> > > Hi David,
> > >
> > > You mean creating some kind of "delimiter" attribute in the domain
> > > entity? That seems like a good idea, although it does not solve the
> > > problem Morgan's mentioned that is the global hierarchy delimiter.
> >
> > There would be no global hierarchy delimiter. Each domain would
> define
> > its own and this would be carried in the JSON as a separate
> parameter so
> > that the recipient can tell how to parse hierarchical names
> >
> > David
> >
> > >
> > > Henrique
> > >
> > > Em qua, 3 de jun de 2015 às 04:21, David Chadwick
> > > mailto:d.w.chadw...@kent.ac.uk>
> > >>
> > escreveu:
> > >
> > >
> > >
> > > On 02/06/2015 23:34, Morgan Fainberg wrote:
> > > > Hi Henrique,
> > > >
> > > > I don't think we need to specifically call out that we want a
> > > domain, we
> > > > should always reference the namespace as we do today.
> > Basically, if we
> > > > ask for a project name we need to also provide it's
> > namespace (your
> > > > option #1). This clearly lines up with how we handle
> projects in
> > > domains
> > > > today.
> > > >
> > > > I would, however, focus on how to represent the namespace in
> > a single
> > > > (usable) string. We've been delaying the work on this for a
> > while
> > > since
> > > > we have historically not provided a clear way to delimit the
> > > hierarchy.
> > > > If we solve the issue with "what is the delimiter" between
> > domain,
> > > > project, and subdomain/subproject, we end up solving the
> > usability
> > >
> > > why not allow the top level domain/project to define the
> > delimiter for
> > > its tree, and to carry the delimiter in the JSON as a new
> > parameter.
> > > That provides full flexibility for all languages and locales
> > >
> > > David
> > >
> > > > issues with proposal #1, and not breaking the current
> > behavior you'd
> > > > expect with implementing option #2 (which at face value
> feels to
> > > be API
> > > > incompat

Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-04 Thread Kevin Benton
Why don't we put the agent heartbeat into a separate greenthread on the
agent so it continues to send updates even when it's busy processing
changes?
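A sketch of that idea, using plain threads for portability (the real agent would use an eventlet greenthread; the class and method names here are illustrative, not Neutron's actual agent code):

```python
import threading
import time


class AgentHeartbeat:
    """Report agent state on a dedicated thread so a busy sync loop
    cannot starve the heartbeat and get the agent marked dead."""

    def __init__(self, report_cb, interval=0.01):
        self._report_cb = report_cb      # e.g. sends state via RPC
        self._interval = interval        # report_interval in the agent
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            self._report_cb()
            self._stop.wait(self._interval)

    def stop(self):
        self._stop.set()
        self._thread.join()
```

With eventlet the same shape applies, but since greenthreads are cooperative, a long non-yielding router-processing loop could still delay the heartbeat, which is presumably why the agent_down_time tuning in [1] matters either way.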
On Jun 4, 2015 2:56 AM, "Anna Kamyshnikova" 
wrote:

> Hi, neutrons!
>
> Some time ago I discovered a bug in l3 agent rescheduling [1]. When there
> are a lot of resources and agent_down_time is not big enough, neutron-server
> starts marking l3 agents as dead. The same issue was discovered and fixed
> for DHCP agents, so I proposed a similar change for the l3 agent. [2]
>
> There is no unified opinion on this bug and the proposed change, so I want to
> ask developers whether it is worth continuing work on this patch or not.
>
> [1] - https://bugs.launchpad.net/neutron/+bug/1440761
> [2] - https://review.openstack.org/171592
>
> --
> Regards,
> Ann Kamyshnikova
> Mirantis, Inc
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] Targeting icehouse-eol?

2015-06-04 Thread Alan Pevec
>>> The only open question I have is if we need to do an Icehouse point release
>>> prior to the tag and dropping the branch, but I don't think that's happened
>>> in the past with branch end of life - the eol tag basically serves as the
>>> placeholder to the last 'release'.
>>
>> I don't think we need to do a point release, there will be the icehouse-eol
>> tag which will mark the same thing. But, even if we later decide to add a
>> point release to mark the same thing it is trivial to push another tag for
>> the same sha1.
>
> I CC-ed the stable branch release managers for their opinion on it. We
> definitely announced a 2014.1.5 last icehouse release, so I think we
> should probably do one. Ideally we would have time to coordinate it in
> the coming week so that both plans are compatible.

Based on the previous 15-month plan, 2014.1.5 was targeting July 2015,
so releasing it next week would be close enough:
https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Ficehouse_releases

I'm not sure the release machinery would still work after removing the branch,
so let's do this one last (codename: Farewell?) point release. I can do it
next week after we finish the pending reviews.

Cheers,
Alan



Re: [openstack-dev] [Ironic] Time to decide something on the vendor tools repo

2015-06-04 Thread John Trowbridge


On 06/04/2015 09:29 AM, Dmitry Tantsur wrote:
> Hi again!
> 
> ~ half an hour has passed since my last email, and now I have one more
> question to discuss and decide!

At this rate, we could match the [all] tag today.

> 
> At the summit we were discussing things like chassis discovery, and
> arrived at a rough conclusion that we want it to live somewhere in a
> separate repo. More precisely, we wanted some place for vendors to
> contribute code (aka scripts) that isn't a good fit for either the standard
> interfaces or the existing vendor passthrough (chassis discovery again is a
> good example).
> 
> I suggest we finally decide on something to unblock people. A few questions
> follow:
> 
> Should we
> 1. create one repo for all vendors (say, ironic-contrib-tools)

As this is for vendor-specific stuff, I think there is a good chance
that there will not be a lot of cross-vendor reviews.

> 2. create a repo for every vendor appearing
> 3. ask vendors to go to stackforge, at least until their solution takes
> shape (like we did with inspector)?

It seems like 2 and 3 are the same except for ownership and location of
the repos. I think it makes more sense for vendors to own their own
repos on stackforge at least until there is enough interest outside of
that vendor to get good external reviews.

> 4. %(your_variant)s
> 
> If we go down the 1-2 route, should
> 1. the ironic-core team own the new repo(s)?
> 2. or should we form a new team from interested people?
> (1 and 2 are not exclusive, actually).
> 
> I personally would go for #3 - stackforge. We already have e.g.
> stackforge/proliantutils as an example of something closely related to
> Ironic, but still independent.
> 
> I'm also fine with #1 (one repo, owned by a group of interested people).
> 
> What do you think?
> 
> Dmitry
> 



[openstack-dev] Why do we drop branches? (WAS: Re: Targeting icehouse-eol?)

2015-06-04 Thread Ihar Hrachyshka

On 06/04/2015 04:15 PM, Alan Pevec wrote:
 The only open question I have is if we need to do an Icehouse
 point release prior to the tag and dropping the branch, but I
 don't think that's happened in the past with branch end of
 life - the eol tag basically serves as the placeholder to the
 last 'release'.
>>> 
>>> I don't think we need to do a point release, there will be the
>>> icehouse-eol tag which will mark the same thing. But, even if
>>> we later decide to add a point release to mark the same thing
>>> it is trivial to push another tag for the same sha1.
>> 
>> I CC-ed the stable branch release managers for their opinion on
>> it. We definitely announced a 2014.1.5 last icehouse release, so
>> I think we should probably do one. Ideally we would have time to
>> coordinate it in the coming week so that both plans are
>> compatible.
> 
> Based on previoius 15 months plan, 2014.1.5 was targeting July
> 2015, so releasing it next week would be close enough: 
> https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Ficehouse_releases
>
>  I'm not sure if release machinery would work after removing the
> branch so let's release this last one (codename: Farewell ?) point
> release. I can do this next week after we finish pending reviews.
> 

Why do we even drop stable branches? If anything, it introduces
unneeded problems for those who have their scripts/cookbooks set to
chase those branches; they would need to switch to the eol tag. Why not
just leave them sitting there, marked read-only?

It becomes especially important now that we say that stable HEAD *is*
a stable release.

Ihar



Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-04 Thread Ruby Loo
On 4 June 2015 at 02:58, Xu, Hejie  wrote:

>  ...
> And another guideline for when we should bump Microversion:
> https://review.openstack.org/#/c/187896/
>

This is timely because just this very minute I was going to send out email
to the Ironic community about this -- when *should* we bump the
microversion. For fear of hijacking this thread, I'm going to start a new
thread to discuss with Ironic folks first.


> As far as I know, there is already a small difference between Nova's and
> Ironic's implementations. Ironic returns the min/max versions in HTTP
> headers when the requested version isn't supported by the server. There is
> no such thing in Nova, but that kind of version negotiation is something we
> need for Nova as well.
> Sean has pointed out that we should use the response body instead of HTTP
> headers, since the body can include an error message. I really hope the
> Ironic team can take a look and say whether you have a compelling reason
> for using HTTP headers.
>

I don't want to change the ironic code so let's go with http headers.
(That's a good enough reason, isn't it?)  :-)

By the way, did you see Ironic's spec on the/our desired behaviour between
Ironic's server and client [1]? It's ... .

Thanks Alex!

--ruby

[1]
http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html
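Whichever transport is chosen in the end (headers or response body), the
client-side negotiation logic is the same. A sketch, with made-up version
numbers, of what a client could do with the min/max range the server
advertises:

```python
def _key(version):
    """Turn an 'X.Y' microversion string into a comparable tuple."""
    major, minor = version.split(".")
    return (int(major), int(minor))


def negotiate(requested, server_min, server_max):
    """Pick the microversion to use, given the server's advertised range.

    If the requested version is inside the range, use it; if the client
    asked for something newer than the server's max, degrade to the max;
    if the client only speaks versions the server no longer supports,
    give up (return None). A sketch of the negotiation idea discussed
    above, not any project's actual client code.
    """
    if _key(server_min) <= _key(requested) <= _key(server_max):
        return requested
    if _key(requested) > _key(server_max):
        return server_max  # client is newer; fall back gracefully
    return None  # client is older than anything the server still speaks
```

With a server range of 1.1-1.6, `negotiate("1.9", "1.1", "1.6")` falls back
to "1.6", while a client that can only speak 1.0 gets None and should error
out.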


Re: [openstack-dev] [keystone][reseller] New way to get a project scoped token by name

2015-06-04 Thread David Stanek
On Thu, Jun 4, 2015 at 10:10 AM Rodrigo Duarte 
wrote:

>
> Also, if we are going to use a delimiter, we need to update the way
> projects names are returned in the GET v3/projects API to include the
> hierarchy so the user (or client) knows how to request a token using the
> project name.
>

This comment made me think of something (maybe crazy)...

I'm hoping that we are not expecting clients to understand how to create
the hierarchical representation of the fully qualified project name
themselves, and that we instead feed it to them in the JSON responses.

If that is the case, we can probably control the delimiter on the server
side without the client even knowing that there is a delimiter. We could
have the first character of this new property actually contain the
delimiter used.

  ".A.B.C" - would mean that "." is the delimiter
  - vs -
  "#A#B#C" - would mean that "#" is the delimiter

The challenge left is to figure out which delimiter to use.
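A sketch of how such a self-describing name could be parsed and produced
(hypothetical helpers for illustration, not Keystone code):

```python
def split_qualified_name(name):
    """Parse a fully qualified project name whose first character
    declares the delimiter, e.g. ".A.B.C" or "#A#B#C" -> ["A", "B", "C"].
    """
    if not name:
        return []
    delimiter = name[0]
    return name[1:].split(delimiter)


def join_qualified_name(parts, delimiter="."):
    """Inverse: build the self-describing string the server would emit."""
    return delimiter + delimiter.join(parts)
```

The client never hard-codes a delimiter: it reads whatever character the
server chose from position zero of the string it was given.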

-- David


[openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Ruby Loo
Hi,

In Kilo, we introduced microversions but it seems to be a work-in-progress.
There is an effort now to add microversion into the API-WG's guidelines, to
provide a consistent way of using microversions across OpenStack projects
[1]. Specifically, in the context of this email, there is a proposed
guideline for when to bump the microversion [2].

Last week, in an IRC discussion [3], Devananda suggested that we bump the
microversion in these situations: ' "required for any
non-backwards-compatible change, and strongly encouraged for any
significant features" ? (and yes, that's subjective) '.

What do people think of that? I think it is clear that we should do it for
any non-backwards-compatible change. The subjective part worries me a bit
-- who decides, the feature submitter or the cores or ?

Alternatively, if people aren't too worried or don't care that much, we can
decide to follow the guideline [2] (and limp along until that guideline is
finalized).

--ruby


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-June/065793.html
[2] https://review.openstack.org/#/c/187896/
[3] around 2015-05-26T16:29:05,
http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2015-05-26.log


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Dmitry Tantsur

On 06/04/2015 04:40 PM, Ruby Loo wrote:

Hi,

In Kilo, we introduced microversions but it seems to be a
work-in-progress. There is an effort now to add microversion into the
API-WG's guidelines, to provide a consistent way of using microversions
across OpenStack projects [1]. Specifically, in the context of this
email, there is a proposed guideline for when to bump the microversion [2].


As I understand it, this guideline says to bump the microversion on every
change, which I strongly -1 as usual. Reason: it's a bump for the sake of a
bump, without any direct benefit for users (no, API discoverability is not
one, because microversions do not solve it).


I'll post the same comment on the guideline.



Last week, in an IRC discussion [3], Devananda suggested that we bump
the microversion in these situations: ' "required for any
non-backwards-compatible change, and strongly encouraged for any
significant features" ? (and yes, that's subjective) '.

What do people think of that? I think it is clear that we should do it
for any non-backwards-compatible change. The subjective part worries me
a bit -- who decides, the feature submitter or the cores or ?


My vote is: if we can prove that a sane user would be broken by the
change, we bump the microversion.




Alternatively, if people aren't too worried or care that much, we can
decide to follow the guideline [3] (and limp along until that guideline
is finalized).

--ruby


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-June/065793.html
[2] https://review.openstack.org/#/c/187896/
[3] around 2015-05-26T16:29:05,
http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2015-05-26.log








Re: [openstack-dev] [neutron][lbaas]Error at listener's barbican container validation

2015-06-04 Thread Phillip Toohill
I think there may just be some non-updated docstrings here. We should take only
the 'ref'. If we get a UUID we won't know how to build it or what to do with it.
The naming of this does need to change, but I don't believe we need to convert
a UUID to a ref.


Is 'self.get_cert_ref_url(cert_ref)' something already available that I may be
overlooking, or an example?


IMO, we need to go through and make sure the 'container_ref' is updated to be
consistent throughout the project for the barbican stuff; that is something I
plan on doing once I get a chance if nobody else does.


Phillip V. Toohill III
Software Developer
phone: 210-312-4366
mobile: 210-440-8374


From: santosh sharma 
Sent: Thursday, June 4, 2015 12:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][lbaas]Error at listener's barbican container 
validation


There is an error while validating the Barbican containers associated with a
listener (TLS and SNI containers) at the plugin layer.

In validate_tls_container(), a container_id is passed whereas it expects a
container_ref_url.

    def _validate_tls(self, listener, curr_listener=None):

        def validate_tls_container(container_ref):
            ...

        def validate_tls_containers(to_validate):
            for container_ref in to_validate:
                validate_tls_container(container_ref)
        ...
        if len(to_validate) > 0:
            validate_tls_containers(to_validate)

# to_validate is a list of container_ids.

# In barbican_cert_manager.py, at get_cert(), cert_ref is a UUID instead of
# a ref_url for the container.

def get_cert(cert_ref, service_name='Octavia', resource_ref=None,
             check_only=False, **kwargs):
    ...
    :param cert_ref: the UUID of the cert to retrieve
    ...
    cert_container = connection.containers.get(
        container_ref=cert_ref)

# The container_ref above is a UUID, whereas connection.containers.get()
# expects a reference URL.

We should build the ref_url from the container UUID before passing it to the
Barbican client APIs. The following should fix the issue.

diff --git a/neutron_lbaas/common/cert_manager/barbican_cert_manager.py 
b/neutron_lbaas/common/cert_manager/barbican_cert_manager.py
index 1ad38ee..8d3c3c4 100644
--- a/neutron_lbaas/common/cert_manager/barbican_cert_manager.py
+++ b/neutron_lbaas/common/cert_manager/barbican_cert_manager.py
@@ -219,6 +222,9 @@ class CertManager(cert_manager.CertManager):
         """
         connection = BarbicanKeystoneAuth.get_barbican_client()

+        if self.is_UUID(cert_ref):
+            cert_ref = self.get_cert_ref_url(cert_ref)
+
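For illustration, the two helpers the diff refers to (is_UUID and
get_cert_ref_url) don't exist in the tree yet; a minimal sketch of what
they could look like, written as module-level functions for simplicity
and with a made-up placeholder endpoint:

```python
import uuid

# Hypothetical Barbican endpoint; in real code this would come from config.
BARBICAN_ENDPOINT = "http://localhost:9311/v1"


def is_uuid(ref):
    """True if ref parses as a bare UUID rather than a full ref URL."""
    try:
        uuid.UUID(ref)
        return True
    except ValueError:
        return False


def get_cert_ref_url(cert_ref):
    """Build the container ref URL barbicanclient expects from a bare
    UUID; full ref URLs pass through unchanged."""
    if is_uuid(cert_ref):
        return "%s/containers/%s" % (BARBICAN_ENDPOINT, cert_ref)
    return cert_ref
```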

Error log:
---
ERROR neutron_lbaas.common.cert_manager.barbican_cert_manager [req-a5e704fb-f04b-45f2-9c50-f3bfebe09afd admin 5ca9fcbf4652456a9bd53582b86bd0e9] Error getting 0b8d5af0-c156-46ad-b4c6-882a84824ce2
2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager Traceback (most recent call last):
2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager   File "/opt/stack/neutron-lbaas/neutron_lbaas/common/cert_manager/barbican_cert_manager.py", line 228, in get_cert
2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager     container_ref=cert_ref
2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager   File "/opt/stack/python-barbicanclient/barbicanclient/containers.py", line 528, in get
2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager     base.validate_ref(container_ref, 'Container')
2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager   File "/opt/stack/python-barbicanclient/barbicanclient/base.py", line 35, in validate_ref
2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager     raise ValueError('{0} incorrectly specified.'.format(entity))
2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager ValueError: Container incorrectly specified.
2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager
2015-06-04 09:58:38.167 INFO neutron.api.v2.resource [req-a5e704fb-f04b-45f2-9c50-f3bfebe09afd admin 5ca9fcbf4652456a9bd53582b86bd0e9] create failed (client error): TLS container 0b8d5af0-c156-46ad-b4c6-882a84824ce2 could not be found
---
--

Thanks
Santosh

Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-04 Thread Salvatore Orlando
One reason for not sending the heartbeat from a separate greenthread could
be that the agent is already doing it [1].
The currently proposed patch addresses the issue blindly - that is to say,
before declaring an agent dead, let's wait some more time because it
could be stuck doing work. In that case I would probably make the
multiplier (currently 2x) configurable.

The reason the state report does not occur is probably that both it
and the resync procedure are periodic tasks. If I got it right, they're both
executed as eventlet greenthreads, but one at a time. Perhaps then adding an
initial delay to the full-sync task might ensure that the first thing an agent
does when it comes up is send a heartbeat to the server?

On the other hand, while doing the initial full resync, is the agent able
to process updates? If not, perhaps it makes sense to keep it marked down
until it finishes synchronisation.

Salvatore

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l3/agent.py#n587
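The "heartbeat in its own thread of control" idea can be sketched like
this. Plain threading stands in for eventlet to keep the example
self-contained, the class and interval are made up, and a counter stands
in for the RPC state report to neutron-server:

```python
import threading
import time


class AgentHeartbeat:
    """Sketch: send state reports from a dedicated thread so that a
    long-running resync cannot starve the heartbeat."""

    def __init__(self, report_interval=0.05):
        self.report_interval = report_interval
        self.reports_sent = 0
        self._stop = threading.Event()

    def _report_state(self):
        # The real agent would RPC the state report to neutron-server here.
        self.reports_sent += 1

    def _heartbeat_loop(self):
        # Report immediately on startup, then on every interval.
        while not self._stop.is_set():
            self._report_state()
            self._stop.wait(self.report_interval)

    def start(self):
        thread = threading.Thread(target=self._heartbeat_loop, daemon=True)
        thread.start()
        return thread

    def stop(self):
        self._stop.set()


agent = AgentHeartbeat()
agent.start()
time.sleep(0.3)  # stand-in for the agent being busy with a full resync
agent.stop()
```

Because the loop reports first and sleeps second, the server sees a
heartbeat as soon as the agent comes up, which also addresses the
initial-delay concern above.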

On 4 June 2015 at 16:16, Kevin Benton  wrote:

> Why don't we put the agent heartbeat into a separate greenthread on the
> agent so it continues to send updates even when it's busy processing
> changes?
> On Jun 4, 2015 2:56 AM, "Anna Kamyshnikova" 
> wrote:
>
>> Hi, neutrons!
>>
>> Some time ago I discovered a bug in l3 agent rescheduling [1]. When
>> there are a lot of resources and agent_down_time is not big enough,
>> neutron-server starts marking l3 agents as dead. The same issue was
>> discovered and fixed for DHCP agents, so I proposed a similar change
>> for the l3 agent. [2]
>>
>> There is no unified opinion on this bug and the proposed change, so I want to
>> ask developers whether it is worth continuing work on this patch or not.
>>
>> [1] - https://bugs.launchpad.net/neutron/+bug/1440761
>> [2] - https://review.openstack.org/171592
>>
>> --
>> Regards,
>> Ann Kamyshnikova
>> Mirantis, Inc
>>
>>
>>
>
>


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Sean Dague
On 06/04/2015 10:50 AM, Dmitry Tantsur wrote:
> On 06/04/2015 04:40 PM, Ruby Loo wrote:
>> Hi,
>>
>> In Kilo, we introduced microversions but it seems to be a
>> work-in-progress. There is an effort now to add microversion into the
>> API-WG's guidelines, to provide a consistent way of using microversions
>> across OpenStack projects [1]. Specifically, in the context of this
>> email, there is a proposed guideline for when to bump the microversion
>> [2].
> 
> As I understand this guideline tells to bump microversion on every
> change which I strongly -1 as usual. Reason: it's bump for the sake of
> bump, without any direct benefit for users (no, API discoverability is
> not one, because microversion do not solve it).
> 
> I'll post the same comment to the guideline.

Backwards-compatible API additions with no user signaling are a fallacy
because they assume the arrow of time flows only one way.

If at version 1.5 you have a resource that is

foo {
  "bar": ...
}

And then you decide you want to add another attribute

foo {
  "bar": ...
  "baz": ...
}

And you don't bump the version, you'll get a set of users that use a
cloud with baz, and incorrectly assume that version 1.5 of the API means
that baz will always be there. Except, there are lots of clouds out
there, including ones that might be at the code commit before it was
added. Because there are lots of deploys in the world, your users can
effectively go back in time.

So now your API definition for version 1.5 is:

"foo may or may not contain baz, and there is no way for you to know if
it will until you try. Good luck."

Which is pretty awful.

Looking at your comments in the WG repo, you also seem to be considering
only projects shipped at major release versions (Kilo, Liberty). That might
be true of Red Hat's product policy, but it's not generally true that all
clouds are at a release boundary. Continuous Deployment of OpenStack has
been a value from Day 1, and many public clouds are not using releases, but
arbitrary points off of master. A microversion describes when a change
happens so that application writers have a very firm contract about what
they are talking to.
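That contract is exactly what a version-pinning client relies on: it only
touches "baz" when the negotiated microversion guarantees the attribute is
there. A sketch (the "1.6 added baz" number is made up):

```python
def parse_version(version):
    """Turn an 'X.Y' microversion string into a comparable tuple."""
    major, minor = version.split(".")
    return (int(major), int(minor))


def read_foo(resource, negotiated_version):
    """Only rely on 'baz' when the negotiated microversion is at or
    above the (made-up) version that introduced it."""
    BAZ_ADDED_IN = "1.6"
    result = {"bar": resource["bar"]}
    if parse_version(negotiated_version) >= parse_version(BAZ_ADDED_IN):
        result["baz"] = resource["baz"]  # guaranteed present at >= 1.6
    return result
```

At a negotiated 1.5 the client ignores "baz" even if a particular cloud
happens to return it, so its behaviour is the same on every deployment.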

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Ironic] Time to decide something on the vendor tools repo

2015-06-04 Thread Ruby Loo
On 4 June 2015 at 09:29, Dmitry Tantsur  wrote:

> At the summit we were discussing things like chassis discovery, and arrived
> at a rough conclusion that we want it to live somewhere in a separate repo.
> More precisely, we wanted some place for vendors to contribute code (aka
> scripts) that isn't a good fit for either the standard interfaces or the
> existing vendor passthrough (chassis discovery again is a good example).

Our summit notes are sparse on this topic [1], but I'll add in what I see
there.


> I suggest to decide something finally to unblock people. A few questions
> follow:
>
> Should we
> 1. create one repo for all vendors (say, ironic-contrib-tools)
>

from summit, "the advantage of having them in one place is that they will
get used and improved by other vendors"

Presumably there'd be subdirectories, one for each vendor and/or based on
functionality.

The disadvantage, maybe, is deciding who will maintain/own (and merge stuff
in) this repo.


2. create a repo for every vendor appearing
>

from summit, "Creating an ironic-utils- repo on stackforge to host
each manufacturer set of tools and document the directo"


> 3. ask vendors to go for stackforge, at least until their solution shapes
> (like we did with inspector)?
>

I think whatever is decided, should be in stackforge.


> 4. %(your_variant)s
>
> If we go down 1-2 route, should
> 1. ironic-core team own the new repo(s)?
> 2. or should we form a new team from interested people?
> (1 and 2 and not exclusive actually).
>
> I personally would go for #3 - stackforge. We already have e.g.
> stackforge/proliantutils as an example of something closely related to
> Ironic, but still independent.
>
> I'm also fine with #1#1 (one repo, owned by group of interested people).
>

I don't think any of these third-party tools should be owned by the ironic
team.

I think that interested parties should get together (or not) to decide
where they'd like to put their stuff. Ironic wiki pages (or somewhere) can
have a link pointing to these other repo(s).

--ruby

[1] https://etherpad.openstack.org/p/liberty-ironic-rack-to-ready-state


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Lucas Alvares Gomes
Hi Ruby,

Thanks for starting this thread; just like you, I've always been
confused about when to bump the microversion of the API and when not to.

> Backwards compatible API adds with no user signaling is a fallacy
> because it assumes the arrow of time flows only one way.
>
> If at version 1.5 you have a resource that is
>
> foo {
>   "bar": ...
> }
>
> And then you decide you want to add another attribute
>
> foo {
>   "bar": ...
>   "baz": ...
> }
>
> And you don't bump the version, you'll get a set of users that use a
> cloud with baz, and incorrectly assume that version 1.5 of the API means
> that baz will always be there. Except, there are lots of clouds out
> there, including ones that might be at the code commit before it was
> added. Because there are lots of deploys in the world, your users can
> effectively go back in time.
>
> So now your API definition for version 1.5 is:
>
> "foo, may or may not contain baz, and there is no way of you knowing if
> it will until you try. good luck."
>
> Which is pretty aweful.
>

Oh, that's a good point, I can see the value on that.

Perhaps the guide should define bumping the microversion in words along
these lines: "Whenever a change that is visible to the client is made to
the API, the microversion should be incremented"?

This is powerful because it gives clients a fine-grained way to detect
which API features are available.

Cheers,
Lucas



Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Dmitry Tantsur

On 06/04/2015 05:03 PM, Sean Dague wrote:

On 06/04/2015 10:50 AM, Dmitry Tantsur wrote:

On 06/04/2015 04:40 PM, Ruby Loo wrote:

Hi,

In Kilo, we introduced microversions but it seems to be a
work-in-progress. There is an effort now to add microversion into the
API-WG's guidelines, to provide a consistent way of using microversions
across OpenStack projects [1]. Specifically, in the context of this
email, there is a proposed guideline for when to bump the microversion
[2].


As I understand this guideline tells to bump microversion on every
change which I strongly -1 as usual. Reason: it's bump for the sake of
bump, without any direct benefit for users (no, API discoverability is
not one, because microversion do not solve it).

I'll post the same comment to the guideline.


Backwards compatible API adds with no user signaling is a fallacy
because it assumes the arrow of time flows only one way.

If at version 1.5 you have a resource that is

foo {
   "bar": ...
}

And then you decide you want to add another attribute

foo {
   "bar": ...
   "baz": ...
}

And you don't bump the version, you'll get a set of users that use a
cloud with baz, and incorrectly assume that version 1.5 of the API means
that baz will always be there. Except, there are lots of clouds out
there, including ones that might be at the code commit before it was
added. Because there are lots of deploys in the world, your users can
effectively go back in time.

So now your API definition for version 1.5 is:

"foo, may or may not contain baz, and there is no way of you knowing if
it will until you try. good luck."

Which is pretty aweful.


Which is not very different from your definition. "Version 1.5 contains 
feature xyz, unless it's disabled by the configuration or patched out 
downstream. Well, 1.4 can also contain the feature, if downstream 
backported it. So good luck again."


If you allow grouping features under one microversion, that becomes even 
worse - you can have a deployment that got a microversion only partially.


For example, that's what I would call API discoverability:

 $ ironic has-capability foobar
 true

and that's how it would play with versioning:

 $ ironic --ironic-api-version 1.2 has-capability foobar
 false
 $ ironic --ironic-api-version 1.6 has-capability foobar
 true

On the contrary, the only thing a microversion tells me is that the 
server installation is based on a particular upstream commit.
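The two mechanisms could also be bridged: if each capability records the
microversion that introduced it, a "has-capability" query can be answered
from the negotiated version alone. A sketch with made-up names and numbers:

```python
# Hypothetical registry mapping each capability to the microversion
# that introduced it.
CAPABILITY_SINCE = {"foobar": (1, 3)}


def has_capability(name, negotiated_version):
    """Answer the 'ironic has-capability foobar' question above for a
    given negotiated microversion (illustration only)."""
    major, minor = (int(part) for part in negotiated_version.split("."))
    since = CAPABILITY_SINCE.get(name)
    return since is not None and (major, minor) >= since
```

This reproduces the CLI behaviour sketched above: the capability reads as
false at 1.2 and true at 1.6, though it says nothing about downstream
patches that disable the feature, which is Dmitry's point.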


To me these are orthogonal problems, and I believe they should be solved 
differently. Our disagreement is due to seeing them as one problem.




Looking at your comments in the WG repo you also seem to only be
considering projects shipped at Major release versions (Kilo, Liberty).
Which might be true of Red Hat's product policy, but it's not generally
true that all clouds are at a release boundary. Continous Deployment of
OpenStack has been a value from Day 1, and many public clouds are not
using releases, but are using arbitrary points off of master.


I don't know why you decided I don't know it, but you're wrong.

A microversion describes when a change happens so that application
writers have a very firm contract about what they are talking to.


No, they don't. Too many things can modify behavior - see above.



-Sean






Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Vahid S Hashemian
Hi Gosha,

Sorry, not sure why my last message was delivered as "undefined".

Anyways, thanks for pointing me to those materials.
I have a feeling, though, that due to the nature of Heat-Translator we would
need to deal with HOT-based templates and not MuranoPL.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs



-Georgy Okrokvertskhov  wrote: -
To: "OpenStack Development Mailing List (not for usage questions)" 

From: Georgy Okrokvertskhov 
Date: 06/03/2015 03:26PM
Subject: Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

Hi,

Murano documentation about all internals is here: 
http://murano.readthedocs.org/en/latest/

You probably need to take some example applications from here: 
https://github.com/openstack/murano-apps

Take something simple like Tomcat and PostgresSQL. You will need to have an 
image for Ubuntu/Debian with the murano agent. It can be downloaded from here: 
http://apps.openstack.org/#tab=glance-images&asset=Debian%208%20x64%20(pre-installed%20murano-agent)

Thanks
Gosha

On Wed, Jun 3, 2015 at 2:21 PM, Vahid S Hashemian  
wrote:
Thanks Gosha.

That's right. I have been using HOT based applications. I have not used 
workflows before and need to dig into them.
If you have any pointers on how to go about workflows please share them with me.

Thanks.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs





-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284




Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-06-04 Thread Miguel Lavalle
Congrats! Well deserved

On Thu, Jun 4, 2015 at 8:50 AM, Assaf Muller  wrote:

> Thank you.
>
> We have a lot of work ahead of us :)
>
>
> - Original Message -
> > It's been a week since I proposed this, with no objections. Welcome to
> the
> > Neutron core reviewer team as the new QA Lieutenant Assaf!
> >
> > On Tue, Jun 2, 2015 at 12:35 PM, Maru Newby < ma...@redhat.com > wrote:
> >
> >
> > +1 from me, long overdue!
> >
> >
> > > On May 28, 2015, at 9:42 AM, Kyle Mestery < mest...@mestery.com >
> wrote:
> > >
> > > Folks, I'd like to propose Assaf Muller to be a member of the Neutron
> core
> > > reviewer team. Assaf has been a long time contributor in Neutron, and
> he's
> > > also recently become my testing Lieutenant. His influence and
> knowledge in
> > > testing will be critical to the team in Liberty and beyond. In
> addition to
> > > that, he's done some fabulous work for Neutron around L3 HA and DVR.
> Assaf
> > > has become a trusted member of our community. His review stats place
> him
> > > in the pack with the rest of the Neutron core reviewers.
> > >
> > > I'd also like to take this time to remind everyone that reviewing code
> is a
> > > responsibility, in Neutron the same as other projects. And core
> reviewers
> > > are especially beholden to this responsibility. I'd also like to point
> out
> > > that +1/-1 reviews are very useful, and I encourage everyone to
> continue
> > > reviewing code even if you are not a core reviewer.
> > >
> > > Existing Neutron cores, please vote +1/-1 for the addition of Assaf to
> the
> > > core reviewer team.
> > >
> > > Thanks!
> > > Kyle
> > >
> > > [1] http://stackalytics.com/report/contribution/neutron-group/180
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Vahid S Hashemian
undefined




Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Sean Dague
On 06/04/2015 11:27 AM, Dmitry Tantsur wrote:
> On 06/04/2015 05:03 PM, Sean Dague wrote:
>> On 06/04/2015 10:50 AM, Dmitry Tantsur wrote:
>>> On 06/04/2015 04:40 PM, Ruby Loo wrote:
 Hi,

 In Kilo, we introduced microversions but it seems to be a
 work-in-progress. There is an effort now to add microversion into the
 API-WG's guidelines, to provide a consistent way of using microversions
 across OpenStack projects [1]. Specifically, in the context of this
 email, there is a proposed guideline for when to bump the microversion
 [2].
>>>
>>> As I understand this guideline tells to bump microversion on every
>>> change which I strongly -1 as usual. Reason: it's bump for the sake of
>>> bump, without any direct benefit for users (no, API discoverability is
>>> not one, because microversion do not solve it).
>>>
>>> I'll post the same comment to the guideline.
>>
>> Backwards compatible API adds with no user signaling is a fallacy
>> because it assumes the arrow of time flows only one way.
>>
>> If at version 1.5 you have a resource that is
>>
>> foo {
>>"bar": ...
>> }
>>
>> And then you decide you want to add another attribute
>>
>> foo {
>>"bar": ...
>>"baz": ...
>> }
>>
>> And you don't bump the version, you'll get a set of users that use a
>> cloud with baz, and incorrectly assume that version 1.5 of the API means
>> that baz will always be there. Except, there are lots of clouds out
>> there, including ones that might be at the code commit before it was
>> added. Because there are lots of deploys in the world, your users can
>> effectively go back in time.
>>
>> So now your API definition for version 1.5 is:
>>
>> "foo, may or may not contain baz, and there is no way of you knowing if
>> it will until you try. good luck."
>>
>> Which is pretty awful.
> 
> Which is not very different from your definition. "Version 1.5 contains
> feature xyz, unless it's disabled by the configuration or patched out
> downstream. Well, 1.4 can also contain the feature, if downstream
> backported it. So good luck again."

The whole point of interop is you can't call it OpenStack if you are
patching it downstream to break the upstream contract. Microversions are
a contract.

Downstream can hack code all they want, it's no longer OpenStack when
they do. If they are ok with it, that's fine. But them taking OpenStack
code and making something that's not OpenStack is beyond the scope of
the problem here. This is about good actors, acting in good faith, to
provide a consistent experience to application writers.

-Sean

-- 
Sean Dague
http://dague.net
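The foo/baz scenario above translates directly into client code: once the microversion is negotiated and pinned, the client knows exactly which fields the contract guarantees. A minimal sketch (field names are from the example; the 1.6 cutoff for "baz" and the tuple-based version handling are assumptions for illustration, not a real Ironic or Nova contract):

```python
import json

def parse_foo(body, negotiated_version):
    """Read a foo resource, trusting only fields the microversion guarantees.

    'bar' exists at every version; 'baz' is assumed here to have been
    added in 1.6, so older servers legitimately omit it.
    """
    foo = json.loads(body)
    result = {"bar": foo["bar"]}
    if negotiated_version >= (1, 6):
        # Contractually present at >= 1.6; a KeyError here means the
        # cloud is breaking the upstream contract.
        result["baz"] = foo["baz"]
    else:
        # At 1.5 the field may or may not be sent; never rely on it.
        result["baz"] = None
    return result
```

Note that at version 1.5 the client ignores "baz" even if a newer server happens to send it, which is exactly the "firm contract" being argued for.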



Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Vahid S Hashemian
undefined




Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-06-04 Thread Paul Michali
+100 Great addition! Congratulations Assaf!

On Thu, Jun 4, 2015 at 11:41 AM Miguel Lavalle  wrote:

> Congrats! Well deserved
>
> On Thu, Jun 4, 2015 at 8:50 AM, Assaf Muller  wrote:
>
>> Thank you.
>>
>> We have a lot of work ahead of us :)
>>
>>
>> - Original Message -
>> > It's been a week since I proposed this, with no objections. Welcome
>> to the
>> > Neutron core reviewer team as the new QA Lieutenant Assaf!
>> >
>> > On Tue, Jun 2, 2015 at 12:35 PM, Maru Newby < ma...@redhat.com > wrote:
>> >
>> >
>> > +1 from me, long overdue!
>> >
>> >
>> > > On May 28, 2015, at 9:42 AM, Kyle Mestery < mest...@mestery.com >
>> wrote:
>> > >
>> > > Folks, I'd like to propose Assaf Muller to be a member of the Neutron
>> core
>> > > reviewer team. Assaf has been a long time contributor in Neutron, and
>> he's
>> > > also recently become my testing Lieutenant. His influence and
>> knowledge in
>> > > testing will be critical to the team in Liberty and beyond. In
>> addition to
>> > > that, he's done some fabulous work for Neutron around L3 HA and DVR.
>> Assaf
>> > > has become a trusted member of our community. His review stats place
>> him
>> > > in the pack with the rest of the Neutron core reviewers.
>> > >
>> > > I'd also like to take this time to remind everyone that reviewing
>> code is a
>> > > responsibility, in Neutron the same as other projects. And core
>> reviewers
>> > > are especially beholden to this responsibility. I'd also like to
>> point out
>> > > that +1/-1 reviews are very useful, and I encourage everyone to
>> continue
>> > > reviewing code even if you are not a core reviewer.
>> > >
>> > > Existing Neutron cores, please vote +1/-1 for the addition of Assaf
>> to the
>> > > core reviewer team.
>> > >
>> > > Thanks!
>> > > Kyle
>> > >
>> > > [1] http://stackalytics.com/report/contribution/neutron-group/180
>> > >
>> __
>> > > OpenStack Development Mailing List (not for usage questions)
>> > > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Vahid S Hashemian
Hi Serg,

Sorry, I seem to be having issues sending messages to the mailing list.

Thanks for your message. I can work on the blueprint spec. Just trying to get a
good picture of related Murano processes and where the connection points to
Heat-Translator should be.
And I agreed with your comment on MuranoPL. I think for TOSCA support and
integration with Heat-Translator we need to consider HOT based packages.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs


-Serg Melikyan  wrote: -
To: "OpenStack Development Mailing List (not for usage questions)" 
From: Serg Melikyan 
Date: 06/04/2015 06:31AM
Subject: Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

Hi Vahid,

Your analysis is correct, and integration of heat-translator is as simple as
you described in your document. It would be really awesome if you would turn
this PDF into a proper specification for the blueprint.

P.S. Regarding several stacks for applications - currently HOT-based packages
create a stack per application, and we don't support the same level of
composition as we have in murano-pl based packages. This is another question
for improvement.

On Wed, Jun 3, 2015 at 12:44 AM, Georgy Okrokvertskhov wrote:
> Hi Vahid,
>
> Thank you for sharing your thoughts.
> I have a question about application life-cycle if we use the TOSCA
> translator. In Murano the main advantage of using the HOT format is that we
> can update the Heat stack with resources as soon as we need to deploy an
> additional application. We can dynamically create multi-tier applications
> using other apps as building blocks. Imagine a Java app on top of Tomcat
> (VM1) and PostgreDB (VM2). All three components are three different apps in
> the catalog. Murano allows you to bring them and deploy them together.
>
> Do you think it will be possible to use the TOSCA translator for Heat stack
> updates? What will we do if we have two apps with two TOSCA templates, like
> Tomcat and Postgre? How can we combine them together?
>
> Thanks
> Gosha
>
> On Tue, Jun 2, 2015 at 12:14 PM, Vahid S Hashemian wrote:
>> This is what I have so far. Would love to hear feedback on it. Thanks.
>>
>> Regards,
>> -
>> Vahid Hashemian, Ph.D.
>> Advisory Software Engineer, IBM Cloud Labs
>
> --
> Georgy Okrokvertskhov
> Architect,
> OpenStack Platform Products,
> Mirantis
> http://www.mirantis.com
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284

-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com




Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Vahid S Hashemian
undefined




Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Monty Taylor
On 06/04/2015 11:27 AM, Dmitry Tantsur wrote:
> On 06/04/2015 05:03 PM, Sean Dague wrote:
>> On 06/04/2015 10:50 AM, Dmitry Tantsur wrote:
>>> On 06/04/2015 04:40 PM, Ruby Loo wrote:
 Hi,

 In Kilo, we introduced microversions but it seems to be a
 work-in-progress. There is an effort now to add microversion into the
 API-WG's guidelines, to provide a consistent way of using microversions
 across OpenStack projects [1]. Specifically, in the context of this
 email, there is a proposed guideline for when to bump the microversion
 [2].
>>>
>>> As I understand this guideline tells to bump microversion on every
>>> change which I strongly -1 as usual. Reason: it's bump for the sake of
>>> bump, without any direct benefit for users (no, API discoverability is
>>> not one, because microversion do not solve it).
>>>
>>> I'll post the same comment to the guideline.
>>
>> Backwards compatible API adds with no user signaling is a fallacy
>> because it assumes the arrow of time flows only one way.
>>
>> If at version 1.5 you have a resource that is
>>
>> foo {
>>"bar": ...
>> }
>>
>> And then you decide you want to add another attribute
>>
>> foo {
>>"bar": ...
>>"baz": ...
>> }
>>
>> And you don't bump the version, you'll get a set of users that use a
>> cloud with baz, and incorrectly assume that version 1.5 of the API means
>> that baz will always be there. Except, there are lots of clouds out
>> there, including ones that might be at the code commit before it was
>> added. Because there are lots of deploys in the world, your users can
>> effectively go back in time.
>>
>> So now your API definition for version 1.5 is:
>>
>> "foo, may or may not contain baz, and there is no way of you knowing if
>> it will until you try. good luck."
>>
>> Which is pretty awful.
> 
> Which is not very different from your definition. "Version 1.5 contains
> feature xyz, unless it's disabled by the configuration or patched out
> downstream. Well, 1.4 can also contain the feature, if downstream
> backported it. So good luck again."
> 
> If you allow grouping features under one microversion, that becomes even
> worse - you can have a deployment that got a microversion only partially.
> 
> For example, that's what I would call API discoverability:
> 
>  $ ironic has-capability foobar
>  true
> 
> and that's how it would play with versioning:
> 
>  $ ironic --ironic-api-version 1.2 has-capability foobar
>  false
>  $ ironic --ironic-api-version 1.6 has-capability foobar
>  true
> 
> On the contrary, the only thing that microversion tells me is that the
> server installation is based on a particular upstream commit.
> 
> To me these are orthogonal problems, and I believe they should be solved
> differently. Our disagreement is due to seeing them as one problem.

We should stop doing this everywhere in OpenStack. It is the absolute
worst experience ever.

Stop allowing people to disable features with config. There is literally
no user on the face of the planet for whom this is a positive thing.

1.5 should mean that your server has Set(A) of features. 1.6 should mean
Set(A+B) - etc. There should be NO VARIATION and any variation on that
should basically mean that the cloud in question is undeniably broken.

I understand that vendors and operators keep wanting to wank around with
their own narcissistic arrogance to "differentiate" from one another.

STOP IT

Seriously, it causes me a GIANT amount of pain, and quite honestly if I
wasn't tied to using OpenStack because I work on it, I would have given
up on it a long time ago because of evil stuff like this.

So, seriously - let's grow up and start telling people that they do not
get to pick and choose user-visible feature sets. If they have an unholy
obsession with a particular backend technology that does not allow a
public feature of the API to work, then they are deploying a broken
cloud and they need to fix it.
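The Set(A)/Set(A+B) rule can be sketched as a monotonic feature table, where each microversion only ever adds features and nothing is subtracted per deployment. The version and feature names below are illustrative, not Ironic's real ones:

```python
# Illustrative data: each microversion strictly adds to the feature set.
FEATURES_ADDED = {
    (1, 5): {"bar"},
    (1, 6): {"baz"},
}

def feature_set(version):
    """Features guaranteed at a given microversion: the union of
    everything added at or below it, with no per-deployment variation."""
    out = set()
    for v, added in FEATURES_ADDED.items():
        if v <= version:
            out |= added
    return out
```

Under this rule, knowing the negotiated version alone tells an application writer the exact feature set; no per-cloud probing is needed.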


>>
>> Looking at your comments in the WG repo you also seem to only be
>> considering projects shipped at Major release versions (Kilo, Liberty).
>> Which might be true of Red Hat's product policy, but it's not generally
>> true that all clouds are at a release boundary. Continuous Deployment of
>> OpenStack has been a value from Day 1, and many public clouds are not
>> using releases, but are using arbitrary points off of master.
> 
> I don't know why you decided I don't know it, but you're wrong.
> 
>> A
>> microversion describes when a change happens so that application
>> writers have a very firm contract about what they are talking to.
> 
> No, they don't. Too many things can modify behavior - see above.
> 
>>
>> -Sean
>>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


_

Re: [openstack-dev] [Cinder] Getting `ValueError: Field `volume_id' cannot be None`

2015-06-04 Thread Thang Pham
The problem is in your test case.  There are no such methods as
"remotefs.db.snapshot_get" or "remotefs.db.snapshot_admin_metadata_get".
You need to use "with mock.patch('cinder.db.snapshot_get') as snapshot_get,
mock.patch('cinder.db.snapshot_admin_metadata_get')
as snapshot_admin_metadata_get".  These incorrect calls somehow created a
side effect in the other test cases.  I updated your patch with what is
correct, so you should follow it for your other tests.  Your test case needs
a lot more work; I just edited it to have it pass the unit tests.
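The restore-on-exit behaviour being relied on here can be shown without Cinder itself: `mock.patch` used as a context manager puts the original attribute back when the block exits, so a correctly targeted patch cannot leak into later tests. A toy stand-in (`FakeDB` is invented here; it is not the real `cinder.db`):

```python
from unittest import mock

class FakeDB:
    """Toy stand-in for a db module; not real Cinder code."""
    @staticmethod
    def snapshot_get(ctxt, snapshot_id):
        return {"id": snapshot_id, "volume_id": "vol-1"}

def lookup_volume(db, snapshot_id):
    # Stand-in for code under test that calls db.snapshot_get.
    return db.snapshot_get(None, snapshot_id)["volume_id"]

# Patch the attribute where it is looked up; on leaving the with-block
# the original is restored, so other tests see the unpatched behaviour.
with mock.patch.object(FakeDB, "snapshot_get",
                       return_value={"id": "s1", "volume_id": "patched"}):
    assert lookup_volume(FakeDB, "s1") == "patched"

assert lookup_volume(FakeDB, "s1") == "vol-1"  # restored after the block
```

Patching a path that does not exist (like "remotefs.db.snapshot_get") never intercepts the real call, which is how stray state can end up influencing unrelated test cases.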

Thang

On Thu, Jun 4, 2015 at 4:36 AM, Deepak Shetty  wrote:

> I was able to narrow it down to the scenario where it fails only when I do:
>
> ./run_tests.sh -N cinder.tests.unit.test_remotefs
> cinder.tests.unit.test_volume.VolumeTestCase
>
> and fails with:
> {0}
> cinder.tests.unit.test_volume.VolumeTestCase.test_can_delete_errored_snapshot
> [0.507361s] ... FAILED
>
> Captured traceback:
> ~~~
> Traceback (most recent call last):
>   File "cinder/tests/unit/test_volume.py", line 3029, in
> test_can_delete_errored_snapshot
> snapshot_obj = objects.Snapshot.get_by_id(self.context,
> snapshot_id)
>   File
> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 169,
> in wrapper
> result = fn(cls, context, *args, **kwargs)
>   File "cinder/objects/snapshot.py", line 130, in get_by_id
> expected_attrs=['metadata'])
>   File "cinder/objects/snapshot.py", line 112, in _from_db_object
> snapshot[name] = value
>   File
> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 691,
> in __setitem__
> setattr(self, name, value)
>   File
> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 70,
> in setter
> field_value = field.coerce(self, name, value)
>   File
> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py", line
> 183, in coerce
> return self._null(obj, attr)
>   File
> "/usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py", line
> 161, in _null
> raise ValueError(_("Field `%s' cannot be None") % attr)
> ValueError: Field `volume_id' cannot be None
>
> Both test suites run fine when I run them individually; both of the below
> succeed:
>
> ./run_tests.sh -N cinder.tests.unit.test_remotefs - no errors
>
> ./run_tests.sh -N cinder.tests.unit.test_volume.VolumeTestCase - no errors
>
> So I modified my patch @ https://review.openstack.org/#/c/172808/ (Patch
> set 6) and
> removed all the test cases I added in test_remotefs.py except one, so that
> we have less code to debug/deal with!
>
> See
> https://review.openstack.org/#/c/172808/6/cinder/tests/unit/test_remotefs.py
>
> Now when I disable test_create_snapshot_online_success, running both
> suites works,
> but when I enable test_create_snapshot_online_success, it fails as
> above.
>
> I am unable to figure out what the connection is between
> test_create_snapshot_online_success in test_remotefs.py
> and the VolumeTestCase.test_can_delete_errored_snapshot failure
> in test_volume.py
>
> Can someone help here ?
>
> thanx,
> deepak
>
>
>
> On Thu, Jun 4, 2015 at 1:37 PM, Deepak Shetty  wrote:
>
>> Hi Thang,
>>   Since you are working on Snapshot Objects, any idea why the test case
>> works when run all by itself, but fails when run as part of the overall
>> suite?
>> This seems to be related to the Snapshot Objects, hence Ccing you.
>>
>> On Wed, Jun 3, 2015 at 9:54 PM, Deepak Shetty 
>> wrote:
>>
>>> Hi All,
>>>   I am hitting a strange issue when running Cinder unit tests against my
>>> patch @
>>> https://review.openstack.org/#/c/172808/5
>>>
>>> I have spent a day and haven't been successful at figuring out how/why
>>> my patch is causing it!
>>>
>>> All the failing tests are part of the VolumeTestCase suite, and from
>>> the error (see below) it seems the Snapshot Object is complaining that
>>> the 'volume_id' field is null (while it shouldn't be)
>>>
>>> An example error from the associated Jenkins run can be seen @
>>>
>>> http://logs.openstack.org/08/172808/5/check/gate-cinder-python27/0abd15e/console.html.gz#_2015-05-22_13_28_47_140
>>>
>>> I am seeing a total of 21 such errors.
>>>
>>> Its strange because, when I try to reproduce it locally in my devstack
>>> env, I see the below:
>>>
>>> 1) When i just run: ./run_tests.sh -N cinder.tests.unit.test_volume.
>>> VolumeTestCase
>>> all testcases pass
>>>
>>> 2) When i run 1 individual testcase: ./run_tests.sh -N
>>> cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
>>> that passes too
>>>
>>> 3) When i run : ./run_tests.sh -N
>>> I see 21 tests failing and all are failing with error similar to the
>>> below
>>>
>>> {0} cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
>>> [0.537366s] ... FAILED
>>>
>>> Captured traceback:
>>> ~~~
>>> Traceback (most recent call last):
>>>   File "cinder/tests/unit/test

[openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Vahid S Hashemian
Hi Serg,

Sorry, I seem to be having issues sending messages to the mailing list.

Thanks for your message. I can work on the blueprint spec. Just trying to get a 
good picture of related Murano processes and where the connection points to 
Heat-Translator should be.
And I agreed with your comment on MuranoPL. I think for TOSCA support and 
integration with Heat-Translator we need to consider HOT based packages.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs




Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-06-04 Thread Jaume Devesa
Congratulations Assaf!!

On 4 June 2015 at 17:45, Paul Michali  wrote:

> +100 Great addition! Congratulations Assaf!
>
> On Thu, Jun 4, 2015 at 11:41 AM Miguel Lavalle 
> wrote:
>
>> Congrats! Well deserved
>>
>> On Thu, Jun 4, 2015 at 8:50 AM, Assaf Muller  wrote:
>>
>>> Thank you.
>>>
>>> We have a lot of work ahead of us :)
>>>
>>>
>>> - Original Message -
>>> > It's been a week since I proposed this, with no objections. Welcome
>>> to the
>>> > Neutron core reviewer team as the new QA Lieutenant Assaf!
>>> >
>>> > On Tue, Jun 2, 2015 at 12:35 PM, Maru Newby < ma...@redhat.com >
>>> wrote:
>>> >
>>> >
>>> > +1 from me, long overdue!
>>> >
>>> >
>>> > > On May 28, 2015, at 9:42 AM, Kyle Mestery < mest...@mestery.com >
>>> wrote:
>>> > >
>>> > > Folks, I'd like to propose Assaf Muller to be a member of the
>>> Neutron core
>>> > > reviewer team. Assaf has been a long time contributor in Neutron,
>>> and he's
>>> > > also recently become my testing Lieutenant. His influence and
>>> knowledge in
>>> > > testing will be critical to the team in Liberty and beyond. In
>>> addition to
>>> > > that, he's done some fabulous work for Neutron around L3 HA and DVR.
>>> Assaf
>>> > > has become a trusted member of our community. His review stats place
>>> him
>>> > > in the pack with the rest of the Neutron core reviewers.
>>> > >
>>> > > I'd also like to take this time to remind everyone that reviewing
>>> code is a
>>> > > responsibility, in Neutron the same as other projects. And core
>>> reviewers
>>> > > are especially beholden to this responsibility. I'd also like to
>>> point out
>>> > > that +1/-1 reviews are very useful, and I encourage everyone to
>>> continue
>>> > > reviewing code even if you are not a core reviewer.
>>> > >
>>> > > Existing Neutron cores, please vote +1/-1 for the addition of Assaf
>>> to the
>>> > > core reviewer team.
>>> > >
>>> > > Thanks!
>>> > > Kyle
>>> > >
>>> > > [1] http://stackalytics.com/report/contribution/neutron-group/180
>>> > >
>>> __
>>> > > OpenStack Development Mailing List (not for usage questions)
>>> > > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>> >
>>> >
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>> >
>>> >
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Jaume Devesa
Software Engineer at Midokura


Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-04 Thread Vahid S Hashemian
undefined




Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Dmitry Tantsur

On 06/04/2015 05:43 PM, Sean Dague wrote:

On 06/04/2015 11:27 AM, Dmitry Tantsur wrote:

On 06/04/2015 05:03 PM, Sean Dague wrote:

On 06/04/2015 10:50 AM, Dmitry Tantsur wrote:

On 06/04/2015 04:40 PM, Ruby Loo wrote:

Hi,

In Kilo, we introduced microversions but it seems to be a
work-in-progress. There is an effort now to add microversion into the
API-WG's guidelines, to provide a consistent way of using microversions
across OpenStack projects [1]. Specifically, in the context of this
email, there is a proposed guideline for when to bump the microversion
[2].


As I understand this guideline tells to bump microversion on every
change which I strongly -1 as usual. Reason: it's bump for the sake of
bump, without any direct benefit for users (no, API discoverability is
not one, because microversion do not solve it).

I'll post the same comment to the guideline.


Backwards compatible API adds with no user signaling is a fallacy
because it assumes the arrow of time flows only one way.

If at version 1.5 you have a resource that is

foo {
"bar": ...
}

And then you decide you want to add another attribute

foo {
"bar": ...
"baz": ...
}

And you don't bump the version, you'll get a set of users that use a
cloud with baz, and incorrectly assume that version 1.5 of the API means
that baz will always be there. Except, there are lots of clouds out
there, including ones that might be at the code commit before it was
added. Because there are lots of deploys in the world, your users can
effectively go back in time.

So now your API definition for version 1.5 is:

"foo, may or may not contain baz, and there is no way of you knowing if
it will until you try. good luck."

Which is pretty awful.


Which is not very different from your definition. "Version 1.5 contains
feature xyz, unless it's disabled by the configuration or patched out
downstream. Well, 1.4 can also contain the feature, if downstream
backported it. So good luck again."


The whole point of interop is you can't call it OpenStack if you are
patching it downstream to break the upstream contract. Microversions are
a contract.

Downstream can hack code all they want, it's no longer OpenStack when
they do. If they are ok with it, that's fine. But them taking OpenStack
code and making something that's not OpenStack is beyond the scope of
the problem here. This is about good actors, acting in good faith, to
provide a consistent experience to application writers.


I disagree with all of the above, but putting aside the ideal-vs-real-world 
discussion, my point actually boils down to:
if you want feature discovery (or, as you call it, a contract), make it 
explicit. Create an API for it. Here you're upset with users guessing 
features - and yet you invent one more way to guess them. It may work 
in an ideal world, but it is still a guessing game. And pretty inconvenient to 
use (to test, to document), I would say.


And within the Ironic community (including people deploying from master), 
I'm still waiting to hear any requests for such a feature at all, but that's 
another question.




-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Which middleware does keystone use for authentication and authorization?

2015-06-04 Thread Amy Zhang
Hi all,

Does anyone know which middleware OpenStack uses to authenticate users
in Keystone in the Kilo release? I saw two middleware implementations: one in
the Keystone client, the other in an independent repository. I know that in
previous versions the middleware in the keystone client was used, but in Kilo I
am not sure whether they are still using the middleware in the keystone client
or the other one. Does anyone have any idea?

Thanks!

-- 
Best regards,
Amy (Yun Zhang)


Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-04 Thread Devananda van der Veen
On Jun 4, 2015 12:00 AM, "Xu, Hejie"  wrote:
>
> Hi, guys,
>
> I'm working on adding microversions into the API-WG's guideline, which makes
> sure we have consistent microversion behavior in the API for users.
> Nova and Ironic already have microversion implementations, and as I
> understand it Magnum https://review.openstack.org/#/c/184975/ is going to
> implement microversions also.
>
> I hope all the projects which support (or plan to support) microversions can
> join the review of the guideline.
>
> The microversion specification (mostly copied from nova-specs):
> https://review.openstack.org/#/c/187112
> And another guideline for when we should bump the microversion:
> https://review.openstack.org/#/c/187896/
>
> As I understand it, there is already a small difference between Nova's and
> Ironic's implementations. Ironic returns the min/max versions in http headers
> when the requested version isn't supported by the server. There is no such
> thing in Nova, but that is version negotiation, which we need for Nova also.
> Sean has pointed out that we should use the response body instead of http
> headers, since the body can include an error message. I really hope the
> Ironic team can take a look and say whether you have a compelling reason for
> using http headers.
>
> And if we decide to return the body instead of http headers, we probably
> need to think about backwards compatibility as well, because the
> microversion mechanism itself isn't versioned. So I think we should keep
> those headers for a while; does that make sense?
>
> I hope we end up with a good guideline for microversions, because we can
> only change the microversion mechanism itself in a backwards-compatible way.

Ironic returns the min/max/current API version in the http headers for
every request.

Why would it return this information in a header on success and in the body
on failure? (How would this inconsistency benefit users?)

To be clear, I'm not opposed to *also* having a useful error message in the
body, but while writing the client side of api versioning, parsing the
range consistently from the response header is, IMO, better than requiring
a conditional.
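A rough sketch of that client-side negotiation (the header names follow Ironic's convention, but treat the helper itself as illustrative rather than actual ironicclient code):

```python
# Illustrative sketch of client-side version negotiation: on any response,
# read the server's supported range from the headers and clamp the requested
# version into it before retrying. Header names follow Ironic's convention.

MIN_HDR = 'X-OpenStack-Ironic-API-Minimum-Version'
MAX_HDR = 'X-OpenStack-Ironic-API-Maximum-Version'

def negotiate(requested, headers):
    """Return the version to retry with, clamped into the server's range."""
    lo = tuple(int(x) for x in headers[MIN_HDR].split('.'))
    hi = tuple(int(x) for x in headers[MAX_HDR].split('.'))
    req = tuple(int(x) for x in requested.split('.'))
    # Tuples compare lexicographically, so (1, 10) > (1, 6) as expected.
    return '.'.join(str(x) for x in min(max(req, lo), hi))

# Server supports 1.1-1.6; client asked for 1.9 and got a 406:
print(negotiate('1.9', {MIN_HDR: '1.1', MAX_HDR: '1.6'}))  # 1.6
```

Because the headers are present on every response, this same code path handles success and failure alike, which is the consistency argument above.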

-Deva


Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-06-04 Thread Somanchi Trinath
Congratulations Assaf ☺



From: Jaume Devesa [mailto:devv...@gmail.com]
Sent: Thursday, June 04, 2015 9:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron 
Core Reviewer Team

Congratulations Assaf!!

On 4 June 2015 at 17:45, Paul Michali 
mailto:p...@michali.net>> wrote:
+100 Great addition! Congratulations Assaf!

On Thu, Jun 4, 2015 at 11:41 AM Miguel Lavalle 
mailto:mig...@mlavalle.com>> wrote:
Congrats! Well deserved

On Thu, Jun 4, 2015 at 8:50 AM, Assaf Muller 
mailto:amul...@redhat.com>> wrote:
Thank you.

We have a lot of work ahead of us :)


- Original Message -
> It's been a week since I proposed this, with no objections. Welcome to the
> Neutron core reviewer team as the new QA Lieutenant Assaf!
>
> On Tue, Jun 2, 2015 at 12:35 PM, Maru Newby < 
> ma...@redhat.com > wrote:
>
>
> +1 from me, long overdue!
>
>
> > On May 28, 2015, at 9:42 AM, Kyle Mestery < 
> > mest...@mestery.com > wrote:
> >
> > Folks, I'd like to propose Assaf Muller to be a member of the Neutron core
> > reviewer team. Assaf has been a long time contributor in Neutron, and he's
> > also recently become my testing Lieutenant. His influence and knowledge in
> > testing will be critical to the team in Liberty and beyond. In addition to
> > that, he's done some fabulous work for Neutron around L3 HA and DVR. Assaf
> > has become a trusted member of our community. His review stats place him
> > in the pack with the rest of the Neutron core reviewers.
> >
> > I'd also like to take this time to remind everyone that reviewing code is a
> > responsibility, in Neutron the same as other projects. And core reviewers
> > are especially beholden to this responsibility. I'd also like to point out
> > that +1/-1 reviews are very useful, and I encourage everyone to continue
> > reviewing code even if you are not a core reviewer.
> >
> > Existing Neutron cores, please vote +1/-1 for the addition of Assaf to the
> > core reviewer team.
> >
> > Thanks!
> > Kyle
> >
> > [1] http://stackalytics.com/report/contribution/neutron-group/180
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: 
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>




--
Jaume Devesa
Software Engineer at Midokura


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Tim Hinrichs
Inline.

On Thu, Jun 4, 2015 at 6:40 AM, Sean Dague  wrote:

> On 06/04/2015 08:52 AM, Adam Young wrote:
> > On 06/04/2015 06:32 AM, Sean Dague wrote:
> >> On 06/03/2015 08:40 PM, Tim Hinrichs wrote:
> >>> As long as there's some way to get the *declarative* policy from the
> >>> system (as a data file or as an API call) that sounds fine.  But I'm
> >>> dubious that it will be easy to keep the API call that returns the
> >>> declarative policy in sync with the actual code that implements that
> >>> policy.
> >> Um... why? Nova (or any other server project) needs to know what the
> >> currently computed policy is to actually enforce it internally. Turning
> >> around and spitting that back out on the wire is pretty straight
> forward.
> >>
> >> Is there some secret dragon I'm missing here?
> >
> > No.  But it is a significant bit of coding to do;  you would need to
> > crawl every API and make sure you hit every code path that could enforce
> > policy.
>
> Um, I don't understand that.
>
> I'm saying that you'd "GET https://my.nova.api.server/policy"
>
> And it would return basically policy.json. There is no crawling every
> bit, this is a standard entry point to return a policy representation.
> Getting all services to implement this would mean that Keystone could
> support interesting policy things with arbitrary projects, not just a
> small curated list, which is going to be really important in a big tent
> world. Monasca and  Murano are just as important to support here as Nova
> and Swift.
>


Definitely agree it'd be great to have an API call that returns policy.
The question that I think Adam and I are trying to answer is how do
projects implement that call?  We've (perhaps implicitly) suggested 3
different options.

1. Have a data file called say 'default_policy.json' that the oslo-policy
engine knows how to use (and override with policy.json or whatever).  The
policy-API call that returns policy then just reads in this file and
returns it.

2. Hard-code the return value of the Python function that implements the
policy-API call.  Different options as to how to do this.

3. Write code that automatically generates the policy-API result by
analyzing the code that implements the rest of the API calls (like
create_vm, delete_vm) and extracting the policy that they implement.  This
would require hitting all code paths that implement policy, etc.
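Option (1) could be sketched roughly as follows (the file names and layout are assumptions for illustration, not oslo.policy's actual loading code):

```python
# Illustrative sketch of option (1): the policy-API call returns the
# project's shipped default policy merged with any operator overrides
# (operator rules win). File names are assumptions for illustration.
import json

def merge_policy(default_rules, override_rules=None):
    """Merge operator overrides on top of the shipped defaults."""
    merged = dict(default_rules)
    if override_rules:
        merged.update(override_rules)  # operator overrides win
    return merged

def load_effective_policy(default_path, override_path=None):
    """What a 'GET /policy' handler would serialize and return."""
    with open(default_path) as f:
        default = json.load(f)
    override = None
    if override_path:
        with open(override_path) as f:
            override = json.load(f)
    return merge_policy(default, override)

# Operator tightens one rule; everything else keeps the default:
print(merge_policy({'compute:create': 'rule:admin_or_owner'},
                   {'compute:create': 'rule:admin_api'}))
# {'compute:create': 'rule:admin_api'}
```

Since the same merged result is what the enforcement engine consumes, option (1) makes (a) below nearly free: the policy-API call and the enforcement path read the same data.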

I'm guessing you had option (2) in mind.  Is that right?  Assuming that's
the case I see two possibilities.

a. The policy-API call is used internally by Nova to check that an API call
is permitted before executing it.  (I'm talking conceptually.  Obviously
you'd not go through http.)

b. The policy-API call is never used internally; rather, each of the other
API calls (like create-server, delete-server) just uses arbitrary Python
logic to decide whether an API call is permitted or not.  This requires the
policy-API call implementation to be kept in sync manually with the other
API calls to ensure the policy-API call returns the actual policy.

I'd be happy with (a) and doubt the practicality of (b).

Tim




> > However, I've contemplated doing something like that with
> > oslo.policy already;  run a workload through a server with policy
> > non-enforcing (Permissive mode) and log the output to a file, then use
> > that output to modify either the policy or the delegations (role
> > assignments or trusts) used in a workflow.
> >
> > The Hard coded defaults worry me, though.  Nova is one piece (a big one,
> > admittedly) of a delicate dance across multiple (not-so-micro) services
> > that make up OpenStack.  Other serivces are going to take their cue from
> > what Nova does, and that would make the overall flow that much harder to
> > maintain.
>
> I don't understand why having hard coded defaults makes things harder,
> as long as they are discoverable. Defaults typically make things easier,
> because people then only change what they need, instead of setting a
> value for everything, having the deployment code update, and making
> their policy miss an important thing, or make something wrong because
> they didn't update it correctly at the same time as code.
>
> > I think we need to break some very ingrained patterns in out policy
> > enforcement.  I would worry that enforcing policy in code would give us
> > something that we could not work around.  Instead, I think we need to
> > ensure that the  Nova team leads the rest of the OpenStack core services
> > in setting up best practices, and that is primarily a communication
> > issue.  Getting to a common understanding of RBAC, and making it clear
> > how roles are modified on a per-api basis will make Nova more robust.
>
> So I feel like I understand the high level dynamic policy end game. I
> feel like what I'm proposing for policy engine with encoded defaults
> doesn't negatively impact that. I feel there is a middle chunk where
> perhaps we've got different concerns or different dragons that we see,
> and are 

Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Devananda van der Veen
On Jun 4, 2015 8:57 AM, "Monty Taylor"  wrote:
>
> On 06/04/2015 11:27 AM, Dmitry Tantsur wrote:
> > On 06/04/2015 05:03 PM, Sean Dague wrote:
> >> On 06/04/2015 10:50 AM, Dmitry Tantsur wrote:
> >>> On 06/04/2015 04:40 PM, Ruby Loo wrote:
>  Hi,
> 
>  In Kilo, we introduced microversions but it seems to be a
>  work-in-progress. There is an effort now to add microversion into the
>  API-WG's guidelines, to provide a consistent way of using
microversions
>  across OpenStack projects [1]. Specifically, in the context of this
>  email, there is a proposed guideline for when to bump the
microversion
>  [2].
> >>>
> >>> As I understand this guideline tells to bump microversion on every
> >>> change which I strongly -1 as usual. Reason: it's bump for the sake of
> >>> bump, without any direct benefit for users (no, API discoverability is
> >>> not one, because microversion do not solve it).
> >>>
> >>> I'll post the same comment to the guideline.
> >>
> >> Backwards compatible API adds with no user signaling is a fallacy
> >> because it assumes the arrow of time flows only one way.
> >>
> >> If at version 1.5 you have a resource that is
> >>
> >> foo {
> >>"bar": ...
> >> }
> >>
> >> And then you decide you want to add another attribute
> >>
> >> foo {
> >>"bar": ...
> >>"baz": ...
> >> }
> >>
> >> And you don't bump the version, you'll get a set of users that use a
> >> cloud with baz, and incorrectly assume that version 1.5 of the API
means
> >> that baz will always be there. Except, there are lots of clouds out
> >> there, including ones that might be at the code commit before it was
> >> added. Because there are lots of deploys in the world, your users can
> >> effectively go back in time.
> >>
> >> So now your API definition for version 1.5 is:
> >>
> >> "foo, may or may not contain baz, and there is no way of you knowing if
> >> it will until you try. good luck."
> >>
> >> Which is pretty awful.
> >
> > Which is not very different from your definition. "Version 1.5 contains
> > feature xyz, unless it's disabled by the configuration or patched out
> > downstream. Well, 1.4 can also contain the feature, if downstream
> > backported it. So good luck again."
> >
> > If you allow grouping features under one microversion, that becomes even
> > worse - you can have a deployment that got a microversion only partially.
> >
> > For example, that's what I would call API discoverability:
> >
> >  $ ironic has-capability foobar
> >  true
> >
> > and that's how it would play with versioning:
> >
> >  $ ironic --ironic-api-version 1.2 has-capability foobar
> >  false
> >  $ ironic --ironic-api-version 1.6 has-capability foobar
> >  true
> >
> > On the contrary, the only thing that microversion tells me is that the
> > server installation is based on a particular upstream commit.
> >
> > To me these are orthogonal problems, and I believe they should be solved
> > differently. Our disagreement is due to seeing them as one problem.
>
> We should stop doing this everywhere in OpenStack. It is the absolute
> worst experience ever.
>
> Stop allowing people to disable features with config. there is literally
> no user on the face of the planet for whom this is a positive thing.
>
> 1.5 should mean that your server has Set(A) of features. 1.6 should mean
> Set(A+B) - etc. There should be NO VARIATION and any variation on that
> should basically mean that the cloud in question is undeniably broken.
>
> I understand that vendors and operators keep wanting to wank around with
> their own narcissistic arrogance to "differentiate" from one another.
>
> STOP IT
>
> Seriously, it causes me GIANT amount of pain and quite honestly if I
> wasn't tied to using OpenStack because I work on it, I would have given
> up on it a long time ago because of evil stuff like this.
>
> So, seriously - let's grow up and start telling people that they do not
> get to pick and choose user-visible feature sets. If they have an unholy
> obsession with a particular backend technology that does not allow a
> public feature of the API to work, then they are deploying a broken
> cloud and they need to fix it.
>

So I just had dinner last night with a very large user of OpenStack (yes,
they exist)  whose single biggest request is that we stop "differentiating"
in the API. To them, any difference in the usability / behavior / API
between OpenStack deployment X and Y is a serious enough problem that it
will have two effects:
- vendor lock in
- they stop using OpenStack
And since avoiding single vendor lock in is important to them, well, really
it has only one result.

TL;DR: Monty is right. We MUST NOT vary the API or behaviour significantly
or non-discoverably between clouds. Or we simply won't have users.
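The contract being argued for here - each microversion only ever adds to the feature set, so "supports 1.6" implies everything 1.5 had - can be sketched as follows (illustrative, with made-up feature names):

```python
# Illustrative sketch of a monotonic microversion contract: the feature set
# at version N is the union of everything introduced at or before N, with
# no config switches or downstream patches subtracting from it.
FEATURES_ADDED = {
    (1, 5): {'bar'},
    (1, 6): {'baz'},
}

def features_at(version):
    """Feature set guaranteed by a given (major, minor) microversion."""
    return set().union(
        *(feats for v, feats in FEATURES_ADDED.items() if v <= version))

print(sorted(features_at((1, 5))))  # ['bar']
print(sorted(features_at((1, 6))))  # ['bar', 'baz']
```

Any cloud where `features_at` does not hold for the versions it advertises is, in Monty's terms, undeniably broken.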

> >>
> >> Looking at your comments in the WG repo you also seem to only be
> >> considering projects shipped at Major release versions (Kilo, Liberty).
> >> Which might be true of Red Hat's product policy, but it's not 

Re: [openstack-dev] Why do we drop branches? (WAS: Re: Targeting icehouse-eol?)

2015-06-04 Thread Jeremy Stanley
On 2015-06-04 16:23:12 +0200 (+0200), Ihar Hrachyshka wrote:
> Why do we even drop stable branches? If anything, it introduces
> unneeded problems to those who have their scripts/cookbooks set to
> chase those branches. They would need to switch to eol tag. Why not
> just leaving them sitting there, marked read only?
> 
> It becomes especially important now that we say that stable HEAD *is*
> a stable release.

It's doable, but we'll need ACL changes applied to every project
participating in this release model to reject new change submissions
and prevent anyone from approving them on every branch which reaches
its EOL date. These ACLs will also grow longer and longer over time
as we need to add new sections for each EOL branch.

Also, it seems to me like a "feature" if downstream consumers have
to take notice and explicitly adjust their tooling to intentionally
continue deploying a release for which we no longer provide support
and security updates.
-- 
Jeremy Stanley



Re: [openstack-dev] Which middleware does keystone use for authentication and authorization?

2015-06-04 Thread Steve Martinelli
You are referring to keystonemiddleware (
https://github.com/openstack/keystonemiddleware) - the Keystone team 
switched many OpenStack services over to this new library in Kilo. It 
was originally a copy of the code in keystoneclient; the driving factor 
for the change was to decouple the middleware from keystoneclient. 
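For a service being switched over, the change is typically a one-line edit to its paste pipeline configuration; the fragment below is illustrative (the file and section names vary per service):

```ini
# Illustrative api-paste.ini fragment for the switch.
[filter:authtoken]
# before (pre-Kilo, middleware shipped inside the client library):
# paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# Kilo onward (standalone keystonemiddleware package):
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
```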

Thanks,

Steve Martinelli
OpenStack Keystone Core

Amy Zhang  wrote on 06/04/2015 12:04:54 PM:

> From: Amy Zhang 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 06/04/2015 12:05 PM
> Subject: [openstack-dev] Which middleware does keystone use for 
> authentication and authorization?
> 
> Hi all,
> 
> Does any one know which middleware does Openstack use to 
> authenticate users in Keystone in Kilo release?  I saw two 
> middleware, one is in Keystone client, the other is an independent 
> directory. I know the previous version, the middleware in keystone 
> client is used, but in Kilo, I am not sure if they are still using 
> the middleware in keystone client or the other. Anyone has any idea?
> 
> Thanks!
> 
> -- 
> Best regards,
> Amy (Yun Zhang)
> 


Re: [openstack-dev] [Ironic] ENROLL state and changing node driver

2015-06-04 Thread Devananda van der Veen
On Jun 4, 2015 5:53 AM, "Dmitry Tantsur"  wrote:
>
> Hi!
>
> While working on the enroll spec [1], I got to thinking: within the new
> state machine, when should we allow a node's driver to be changed?
>
> My initial idea was to only allow driver changes in ENROLL. That sounds
> good to me, but then it would be impossible to change a driver after moving
> forward: we don't plan on having a way back to ENROLL from MANAGEABLE.
>
> What do you folks think we should do:
> 1. Leave the driver field as it was before
> 2. Allow changing the driver in ENROLL, disallow it later
> 3. Allow changing the driver in ENROLL only, but create a way back from
> MANAGEABLE to ENROLL ("unmanage"??)
>
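Option (2) from Dmitry's list amounts to a simple state check on the update path; a minimal sketch (illustrative helper, not Ironic code, though the state names follow Ironic's state machine):

```python
# Illustrative sketch of option (2): reject driver updates for any node
# that has moved past ENROLL. State names follow the Ironic state machine.
ENROLL = 'enroll'

def check_driver_change_allowed(provision_state):
    """Raise if a node in this provision state may not change its driver."""
    if provision_state != ENROLL:
        raise ValueError(
            "Node driver can only be changed in the '%s' state; "
            "current state: '%s'" % (ENROLL, provision_state))

check_driver_change_allowed('enroll')          # allowed, returns None
try:
    check_driver_change_allowed('manageable')  # rejected
except ValueError as e:
    print(e)
```

Option (3) would keep this same check but add an "unmanage" transition so a node can re-enter ENROLL, which is what makes it more complex on the API side.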

What problem are you trying to solve? Because I don't see a problem with
the current behavior, and you're proposing breaking the API and requiring
users to follow a significantly more complex process should they need to
change what driver is in use for a node, and preventing ever doing that
while a workload is running...

-Deva

> Cheers,
> Dmitry
>
> [1] https://review.openstack.org/#/c/179151
>


Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-06-04 Thread Vikram Choudhary
Congrats Assaf ;)

On Thu, Jun 4, 2015 at 9:35 PM, Somanchi Trinath <
trinath.soman...@freescale.com> wrote:

>  Congratulations Assaf ☺
>
>
>
>
>
>
>
> *From:* Jaume Devesa [mailto:devv...@gmail.com]
> *Sent:* Thursday, June 04, 2015 9:25 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] Proposing Assaf Muller for the
> Neutron Core Reviewer Team
>
>
>
> Congratulations Assaf!!
>
>
>
> On 4 June 2015 at 17:45, Paul Michali  wrote:
>
>  +100 Great addition! Congratulations Assaf!
>
>
>
> On Thu, Jun 4, 2015 at 11:41 AM Miguel Lavalle 
> wrote:
>
>  Congrats! Well deserved
>
>
>
> On Thu, Jun 4, 2015 at 8:50 AM, Assaf Muller  wrote:
>
> Thank you.
>
> We have a lot of work ahead of us :)
>
>
>
> - Original Message -
> > It's been a week since I proposed this, with no objections. Welcome to
> the
> > Neutron core reviewer team as the new QA Lieutenant Assaf!
> >
> > On Tue, Jun 2, 2015 at 12:35 PM, Maru Newby < ma...@redhat.com > wrote:
> >
> >
> > +1 from me, long overdue!
> >
> >
> > > On May 28, 2015, at 9:42 AM, Kyle Mestery < mest...@mestery.com >
> wrote:
> > >
> > > Folks, I'd like to propose Assaf Muller to be a member of the Neutron
> core
> > > reviewer team. Assaf has been a long time contributor in Neutron, and
> he's
> > > also recently become my testing Lieutenant. His influence and
> knowledge in
> > > testing will be critical to the team in Liberty and beyond. In
> addition to
> > > that, he's done some fabulous work for Neutron around L3 HA and DVR.
> Assaf
> > > has become a trusted member of our community. His review stats place
> him
> > > in the pack with the rest of the Neutron core reviewers.
> > >
> > > I'd also like to take this time to remind everyone that reviewing code
> is a
> > > responsibility, in Neutron the same as other projects. And core
> reviewers
> > > are especially beholden to this responsibility. I'd also like to point
> out
> > > that +1/-1 reviews are very useful, and I encourage everyone to
> continue
> > > reviewing code even if you are not a core reviewer.
> > >
> > > Existing Neutron cores, please vote +1/-1 for the addition of Assaf to
> the
> > > core reviewer team.
> > >
> > > Thanks!
> > > Kyle
> > >
> > > [1] http://stackalytics.com/report/contribution/neutron-group/180
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Jaume Devesa
>
> Software Engineer at Midokura
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [infra][third-party] Common-CI Virtual Sprint

2015-06-04 Thread Asselin, Ramy
Hi,

It was nice to meet many of you at the Vancouver Infra Working Session. Quite a 
bit of progress was made finding owners for some of the common-ci refactoring 
work [1] and puppet testing work. A few patches were proposed, reviewed, and 
some merged!

As we continue the effort over the coming weeks, I thought it would be helpful 
to schedule a virtual sprint to complete the remaining tasks.

GOAL: By the end of the sprint, we should be able to set up a 3rd party CI 
system using the same puppet components that the OpenStack infrastructure team 
is using in its production CI system.

I proposed this in Tuesday's Infra meeting [2], and there was general consensus 
that this would be valuable (if clear goals are documented) and that July 8 & 9 
are good dates (after the US & Canada July 1st and 4th holidays, not on a 
Tuesday, and not near a Liberty milestone).

I would like to get comments from a broader audience on the goals and dates. 

You can show interest by adding your name to the etherpad [3].

Thank you,
Ramy
irc: asselin



[1] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[2] 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-06-02-19.01.html
[3] https://etherpad.openstack.org/p/common-ci-sprint





Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-04 Thread Kevin Benton
Is there a way to parallelize the periodic tasks? I wanted to go this route
because I encountered cases where a bunch of routers would get scheduled to
l3 agents and they would all hit the server nearly simultaneously with a
sync routers task.

This could result in thousands of routers and their floating IPs being
retrieved, which would result in tens of thousands of SQL queries. During
this time, the agents would time out and have all their routers
rescheduled, leading to a downward spiral of doom.

I spent a bunch of time optimizing the sync routers calls on the l3 side so
it's hard to trigger this now, but I would be more comfortable if we didn't
depend on sync routers taking less time than the agent down time.

If we can have the heartbeats always running, it should solve both issues.
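A dependency-free sketch of that idea follows. The real agent would spawn an eventlet greenthread and call the agent's state-report method; plain OS threads and a callback are used here purely to keep the sketch self-contained:

```python
# Illustrative sketch: run the agent heartbeat in its own background
# thread so a long-running sync can never starve the state report.
import threading
import time

class HeartbeatReporter:
    def __init__(self, interval, report_fn):
        self.interval = interval
        self.report_fn = report_fn
        self._stop = threading.Event()

    def start(self):
        # Daemon thread: dies with the process, like a greenthread would.
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # Event.wait(timeout) returns False on timeout, so the loop keeps
        # reporting until stop() sets the event.
        while not self._stop.wait(self.interval):
            self.report_fn()

    def stop(self):
        self._stop.set()

beats = []
reporter = HeartbeatReporter(0.05, lambda: beats.append(time.time()))
reporter.start()
time.sleep(0.3)          # stand-in for a slow sync_routers call
reporter.stop()
print(len(beats) >= 2)   # heartbeats kept flowing during the "sync": True
```

With this structure the server keeps receiving heartbeats even while the agent is mid-resync, so agent_down_time no longer has to exceed the worst-case sync duration.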
On Jun 4, 2015 8:56 AM, "Salvatore Orlando"  wrote:

> One reason for not sending the heartbeat from a separate greenthread could
> be that the agent is already doing it [1].
> The current proposed patch addresses the issue blindly - that is to say
> before declaring an agent dead let's wait for some more time because it
> could be stuck doing stuff. In that case I would probably make the
> multiplier (currently 2x) configurable.
>
> The reason for which state report does not occur is probably that both it
> and the resync procedure are periodic tasks. If I got it right they're both
> executed as eventlet greenthreads but one at a time. Perhaps then adding an
> initial delay to the full sync task might ensure the first thing an agent
> does when it comes up is sending a heartbeat to the server?
>
> On the other hand, while doing the initial full resync, is the agent able
> to process updates? If not perhaps it makes sense to have it down until it
> finishes synchronisation.
>
> Salvatore
>
> [1]
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l3/agent.py#n587
>
> On 4 June 2015 at 16:16, Kevin Benton  wrote:
>
>> Why don't we put the agent heartbeat into a separate greenthread on the
>> agent so it continues to send updates even when it's busy processing
>> changes?
>> On Jun 4, 2015 2:56 AM, "Anna Kamyshnikova" 
>> wrote:
>>
>>> Hi, neutrons!
>>>
>>> Some time ago I discovered a bug for l3 agent rescheduling [1]. When
>>> there are a lot of resources and agent_down_time is not big enough,
>>> neutron-server starts marking l3 agents as dead. The same issue has been
>>> discovered and fixed for DHCP-agents. I proposed a change similar to those
>>> that were done for DHCP-agents. [2]
>>>
>>> There is no unified opinion on this bug and the proposed change, so I want
>>> to ask developers whether it is worth continuing work on this patch or not.
>>>
>>> [1] - https://bugs.launchpad.net/neutron/+bug/1440761
>>> [2] - https://review.openstack.org/171592
>>>
>>> --
>>> Regards,
>>> Ann Kamyshnikova
>>> Mirantis, Inc
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why do we drop branches? (WAS: Re: Targeting icehouse-eol?)

2015-06-04 Thread ZZelle
argh

BLOCK push on other stable branches

[access "refs/heads/stable/*"]
  push =  block group "Anonymous Users"



On Thu, Jun 4, 2015 at 6:34 PM, ZZelle  wrote:

> We can do the opposite to avoid more and more ACLs:
>
> ALLOW push on some specific stable branches
>
> [access "refs/heads/stable/kilo"]
>   push = allow group ***-stable-maint
>
> [access "refs/heads/stable/juno"]
>   push = allow group ***-stable-maint
>
>
> BLOCK push on other stable branches
>
> [access "refs/heads/stable/juno"]
>   push =  block group "Anonymous Users"
>
>
> Cedric/ZZelle@IRC
>
>
>
>
>
> On Thu, Jun 4, 2015 at 6:15 PM, Jeremy Stanley  wrote:
>
>> On 2015-06-04 16:23:12 +0200 (+0200), Ihar Hrachyshka wrote:
>> > Why do we even drop stable branches? If anything, it introduces
>> > unneeded problems to those who have their scripts/cookbooks set to
>> > chase those branches. They would need to switch to eol tag. Why not
>> > just leaving them sitting there, marked read only?
>> >
>> > It becomes especially important now that we say that stable HEAD *is*
>> > a stable release.
>>
>> It's doable, but we'll need ACL changes applied to every project
>> participating in this release model to reject new change submissions
>> and prevent anyone from approving them on every branch which reaches
>> its EOL date. These ACLs will also grow longer and longer over time
>> as we need to add new sections for each EOL branch.
>>
>> Also, it seems to me like a "feature" if downstream consumers have
>> to take notice and explicitly adjust their tooling to intentionally
>> continue deploying a release for which we no longer provide support
>> and security updates.
>> --
>> Jeremy Stanley
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] "stubs" considered harmful in spec tests

2015-06-04 Thread Rich Megginson
Summary - In puppet module spec tests, do not use "stubs", which allows 
the method to be called 0 or more times.  Instead, use "expects", which 
means the method must be called exactly once, or use one of the more 
fine-grained expectation methods.


Our puppet unit tests mostly use rspec, but use Mocha 
http://gofreerange.com/mocha/docs/index.html for object mocking and 
method stubbing.


I have already run into several cases where the spec test result is 
misleading because "stubs" was used instead of "expects", and I have 
spent a lot of time trying to figure out why a method was not called, 
because a stub like


  provider.class.stubs(:openstack)
.with('endpoint', 'list', '--quiet', 
'--format', 'csv', [])
.returns('"ID","Region","Service Name","Service 
Type","Enabled","Interface","URL"

"2b38d77363194018b2b9b07d7e6bdc13","RegionOne","keystone","identity",True,"admin","http://127.0.0.1:5002/v3";
"3097d316c19740b7bc866c5cb2d7998b","RegionOne","keystone","identity",True,"internal","http://127.0.0.1:5001/v3";
"3445dddcae1b4357888ee2a606ca1585","RegionOne","keystone","identity",True,"public","http://127.0.0.1:5000/v3";
')

implies that "openstack endpoint list" will be called.

If at all possible, we should use an explicit expectation.  For example, 
in the above case, use "expects" instead:


  provider.class.expects(:openstack)
.with('endpoint', 'list', '--quiet', 
'--format', 'csv', [])
.returns('"ID","Region","Service Name","Service 
Type","Enabled","Interface","URL"

"2b38d77363194018b2b9b07d7e6bdc13","RegionOne","keystone","identity",True,"admin","http://127.0.0.1:5002/v3";
"3097d316c19740b7bc866c5cb2d7998b","RegionOne","keystone","identity",True,"internal","http://127.0.0.1:5001/v3";
"3445dddcae1b4357888ee2a606ca1585","RegionOne","keystone","identity",True,"public","http://127.0.0.1:5000/v3";
')

This means that "openstack endpoint list" must be called once, and only 
once.  For odd cases where you want a method to be called a certain 
number of times, or to return different values each time it is called, 
the Expectation class 
http://gofreerange.com/mocha/docs/Mocha/Expectation.html should be used 
to modify the initial expectation.


Unfortunately, I don't think we can just do a blanket 
"s/stubs/expects/g" in *_spec.rb, without incurring a lot of test 
failures.  So perhaps we don't have to do this right away, but I think 
future code reviews should -1 any spec file that uses "stubs" without a 
strong justification.
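For anyone who thinks better in Python terms, the same trap exists with 
unittest.mock; here is a sketch of the principle (illustrative only, not 
one of our actual puppet tests):

```python
from unittest import mock

# A bare Mock behaves like Mocha's "stubs": zero calls is perfectly
# legal, so a call site that is never reached passes silently.
api = mock.Mock()
api.endpoint_list.return_value = []

def sync_endpoints(client, really=False):
    # deliberate bug: the flag accidentally guards the call
    if really:
        return client.endpoint_list()
    return []

sync_endpoints(api)                        # flag forgotten; stub never called
assert api.endpoint_list.call_count == 0   # a stub-style test can't notice

sync_endpoints(api, really=True)
# This is the "expects" equivalent: the method must have been called
# exactly once, or the test fails loudly.
api.endpoint_list.assert_called_once()
```

A plain Mock is the "stubs" analogue - it happily records zero calls - 
while assert_called_once() is the "expects" analogue that fails loudly.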


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why do we drop branches? (WAS: Re: Targeting icehouse-eol?)

2015-06-04 Thread ZZelle
We can do the opposite to avoid more and more ACLs:

ALLOW push on some specific stable branches

[access "refs/heads/stable/kilo"]
  push = allow group ***-stable-maint

[access "refs/heads/stable/juno"]
  push = allow group ***-stable-maint


BLOCK push on other stable branches

[access "refs/heads/stable/juno"]
  push =  block group "Anonymous Users"


Cedric/ZZelle@IRC





On Thu, Jun 4, 2015 at 6:15 PM, Jeremy Stanley  wrote:

> On 2015-06-04 16:23:12 +0200 (+0200), Ihar Hrachyshka wrote:
> > Why do we even drop stable branches? If anything, it introduces
> > unneeded problems to those who have their scripts/cookbooks set to
> > chase those branches. They would need to switch to eol tag. Why not
> > just leaving them sitting there, marked read only?
> >
> > It becomes especially important now that we say that stable HEAD *is*
> > a stable release.
>
> It's doable, but we'll need ACL changes applied to every project
> participating in this release model to reject new change submissions
> and prevent anyone from approving them on every branch which reaches
> its EOL date. These ACLs will also grow longer and longer over time
> as we need to add new sections for each EOL branch.
>
> Also, it seems to me like a "feature" if downstream consumers have
> to take notice and explicitly adjust their tooling to intentionally
> continue deploying a release for which we no longer provide support
> and security updates.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Adam Young

On 06/04/2015 09:40 AM, Sean Dague wrote:

Is there some secret dragon I'm missing here?

>
>No.  But it is a significant bit of coding to do;  you would need to
>crawl every API and make sure you hit every code path that could enforce
>policy.

Um, I don't understand that.

I'm saying that you'd "GET https://my.nova.api.server/policy";
What would that return?  The default policy.json file that you ship?  Or 
would it be auto-generated based on enforcement in the code?


If it is auto-generated, you need to crawl the code, somehow, to 
generate that.


If it is policy.json, then you are not implementing the defaults in 
code, just returning the one managed by the CMS and deployed with the 
Service endpoint.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Sean Dague
On 06/04/2015 12:12 PM, Tim Hinrichs wrote:
> Inline.
> 
> On Thu, Jun 4, 2015 at 6:40 AM, Sean Dague  > wrote:
> 
> On 06/04/2015 08:52 AM, Adam Young wrote:
> > On 06/04/2015 06:32 AM, Sean Dague wrote:
> >> On 06/03/2015 08:40 PM, Tim Hinrichs wrote:
> >>> As long as there's some way to get the *declarative* policy from the
> >>> system (as a data file or as an API call) that sounds fine.  But I'm
> >>> dubious that it will be easy to keep the API call that returns the
> >>> declarative policy in sync with the actual code that implements that
> >>> policy.
> >> Um... why? Nova (or any other server project) needs to know what the
> >> currently computed policy is to actually enforce it internally. Turning
> >> around and spitting that back out on the wire is pretty straight 
> forward.
> >>
> >> Is there some secret dragon I'm missing here?
> >
> > No.  But it is a significant bit of coding to do;  you would need to
> > crawl every API and make sure you hit every code path that could enforce
> > policy.
> 
> Um, I don't understand that.
> 
> I'm saying that you'd "GET https://my.nova.api.server/policy";
> 
> And it would return basically policy.json. There is no crawling every
> bit, this is a standard entry point to return a policy representation.
> Getting all services to implement this would mean that Keystone could
> support interesting policy things with arbitrary projects, not just a
> small curated list, which is going to be really important in a big tent
> world. Monasca and  Murano are just as important to support here as Nova
> and Swift.
> 
> 
> 
> Definitely agree it'd be great to have an API call that returns policy. 
> The question that I think Adam and I are trying to answer is how do
> projects implement that call?  We've (perhaps implicitly) suggested 3
> different options.
> 
> 1. Have a data file called say 'default_policy.json' that the
> oslo-policy engine knows how to use (and override with policy.json or
> whatever).  The policy-API call that returns policy then just reads in
> this file and returns it. 
> 
> 2. Hard-code the return value of the Python function that implements the
> policy-API call.  Different options as to how to do this. 
> 
> 3. Write code that automatically generates the policy-API result by
> analyzing the code that implements the rest of the API calls (like
> create_vm, delete_vm) and extracting the policy that they implement. 
> This would require hitting all code paths that implement policy, etc.
> 
> I'm guessing you had option (2) in mind.  Is that right?  Assuming
> that's the case I see two possibilities.
> 
> a. The policy-API call is used internally by Nova to check that an API
> call is permitted before executing it.  (I'm talking conceptually. 
> Obviously you'd not go through http.)
> 
> b. The policy-API call is never used internally; rather, each of the
> other API calls (like create-server, delete-server) just use arbitrary
> Python logic to decide whether an API call is permitted or not.  This
> requires the policy-API call implementation to be kept in sync manually
> with the other API calls to ensure the policy-API call returns the
> actual policy.
> 
> I'd be happy with (a) and doubt the practicality of (b).

Right, I'm thinking 2 (a). There is some engine internally (presumably
part of oslo.policy) where we can feed it sources in order (sources
could be code structures or files on disk, we already support multi file
with current oslo.policy and incubator code):

  Base + Patch1 + Patch2 + ...

  policy.add(Base)
  policy.add(Patch1)
  policy.add(Patch2)

You can then call:

  policy.enforce(context, rulename, ...) like you do today, it knows
what it's doing.

And you can also call:

  for_export = policy.export()

To dump the computed policy back out. Which is a thing that doesn't
exist today. The "GET /policy" would just build the same policy engine,
which computes the final rule set, and exports it.

The bulk of the complicated code would be in oslo.policy, so shared.
Different projects have different wsgi stacks, so will have a bit of
different handling code for the request, but the fact that all the
interesting payload is a policy.export() means the development overhead
should be pretty minimal.
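To make that concrete, here is a toy model of the layering with made-up 
names (the real oslo.policy API will differ):

```python
# Toy model of the Base + Patch1 + Patch2 idea: rules are a dict keyed
# by rule name, later layers override per key, and export() returns the
# computed set that a "GET /policy" endpoint would serve.
class PolicyStack:
    def __init__(self):
        self._rules = {}

    def add(self, layer):
        self._rules.update(layer)   # merge per rule name, not wholesale replace

    def export(self):
        return dict(self._rules)

base = {
    "get_network": "rule:admin_or_owner or rule:shared or rule:external",
    "external": "field:networks:router:external=True",
}
patch = {"get_network": "rule:admin_or_owner"}   # deployer override

policy = PolicyStack()
policy.add(base)
policy.add(patch)
computed = policy.export()
# computed["get_network"] is now "rule:admin_or_owner"; "external" is untouched
```

The key point is that add() merges per rule name rather than replacing 
the whole set, so a deployer patch can override get_network while 
leaving external alone.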

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-04 Thread Dmitry Tantsur
So we should have told the folks to stop developing the agent? (Not sure why
you all think I'm talking about us.) Maybe.

But anyway, everyone deliberately ignores my points, so I'm done with this
discussion.
On Jun 4, 2015 18:17, "Devananda van der Veen" <
devananda@gmail.com> wrote:

>
> On Jun 4, 2015 8:57 AM, "Monty Taylor"  wrote:
> >
> > On 06/04/2015 11:27 AM, Dmitry Tantsur wrote:
> > > On 06/04/2015 05:03 PM, Sean Dague wrote:
> > >> On 06/04/2015 10:50 AM, Dmitry Tantsur wrote:
> > >>> On 06/04/2015 04:40 PM, Ruby Loo wrote:
> >  Hi,
> > 
> >  In Kilo, we introduced microversions but it seems to be a
> >  work-in-progress. There is an effort now to add microversions into
> >  the API-WG's guidelines, to provide a consistent way of using
> >  microversions across OpenStack projects [1]. Specifically, in the
> >  context of this email, there is a proposed guideline for when to
> >  bump the microversion [2].
> > >>>
> > >>> As I understand it, this guideline says to bump the microversion on
> > >>> every change, which I strongly -1 as usual. Reason: it's a bump for
> > >>> the sake of bumping, without any direct benefit for users (no, API
> > >>> discoverability is not one, because microversions do not solve it).
> > >>>
> > >>> I'll post the same comment to the guideline.
> > >>
> > >> Backwards-compatible API additions with no user signaling are a
> > >> fallacy because they assume the arrow of time flows only one way.
> > >>
> > >> If at version 1.5 you have a resource that is
> > >>
> > >> foo {
> > >>"bar": ...
> > >> }
> > >>
> > >> And then you decide you want to add another attribute
> > >>
> > >> foo {
> > >>"bar": ...
> > >>"baz": ...
> > >> }
> > >>
> > >> And if you don't bump the version, you'll get a set of users that use a
> > >> cloud with baz, and incorrectly assume that version 1.5 of the API
> means
> > >> that baz will always be there. Except, there are lots of clouds out
> > >> there, including ones that might be at the code commit before it was
> > >> added. Because there are lots of deploys in the world, your users can
> > >> effectively go back in time.
> > >>
> > >> So now your API definition for version 1.5 is:
> > >>
> > >> "foo, may or may not contain baz, and there is no way of you knowing
> if
> > >> it will until you try. good luck."
> > >>
> > >> Which is pretty awful.
> > >
> > > Which is not very different from your definition. "Version 1.5 contains
> > > feature xyz, unless it's disabled by the configuration or patched out
> > > downstream. Well, 1.4 can also contain the feature, if downstream
> > > backported it. So good luck again."
> > >
> > > If you allow grouping features under one microversion, that becomes
> > > even worse - you can have a deployment that got the microversion only
> > > partially.
> > >
> > > For example, that's what I would call API discoverability:
> > >
> > >  $ ironic has-capability foobar
> > >  true
> > >
> > > and that's how it would play with versioning:
> > >
> > >  $ ironic --ironic-api-version 1.2 has-capability foobar
> > >  false
> > >  $ ironic --ironic-api-version 1.6 has-capability foobar
> > >  true
> > >
> > > On the contrary, the only thing that microversion tells me is that the
> > > server installation is based on a particular upstream commit.
> > >
> > > To me these are orthogonal problems, and I believe they should be
> > > solved differently. Our disagreement is due to seeing them as one
> > > problem.
> >
> > We should stop doing this everywhere in OpenStack. It is the absolute
> > worst experience ever.
> >
> > Stop allowing people to disable features with config. There is literally
> > no user on the face of the planet for whom this is a positive thing.
> >
> > 1.5 should mean that your server has Set(A) of features. 1.6 should mean
> > Set(A+B) - etc. There should be NO VARIATION and any variation on that
> > should basically mean that the cloud in question is undeniably broken.
> >
> > I understand that vendors and operators keep wanting to wank around with
> > their own narcissistic arrogance to "differentiate" from one another.
> >
> > STOP IT
> >
> > Seriously, it causes me GIANT amount of pain and quite honestly if I
> > wasn't tied to using OpenStack because I work on it, I would have given
> > up on it a long time ago because of evil stuff like this.
> >
> > So, seriously - let's grow up and start telling people that they do not
> > get to pick and choose user-visible feature sets. If they have an unholy
> > obsession with a particular backend technology that does not allow a
> > public feature of the API to work, then they are deploying a broken
> > cloud and they need to fix it.
> >
>
> So I just had dinner last night with a very large user of OpenStack (yes,
> they exist)  whose single biggest request is that we stop "differentiating"
> in the API. To them, any difference in the usability / behavior / API
> between OpenStack deployment X and Y is a serious enough problem that it

Re: [openstack-dev] [global-requirements][pbr] tarball and git requirements no longer supported in requirements.txt

2015-06-04 Thread Kevin Benton
+1. I had set up a CI for a third-party plugin and the easiest thing to do
to make sure it was running tests with the latest copy of the corresponding
neutron branch was to put the git URL in requirements.txt.

We wanted to always test the latest code so we had early detection of
failures. What's the appropriate way to do that without using a git
reference?
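For context, the kind of line we relied on, and the tox.ini alternative 
that projects like neutron-vpnaas use instead, look roughly like this 
(URLs and exact flags illustrative, from memory):

```ini
# requirements.txt (the now-unsupported form that pbr used to honor):
-e git+https://git.openstack.org/openstack/neutron#egg=neutron

# tox.ini (the alternative: keep the git URL in the test env's deps,
# leaving requirements.txt free of VCS references):
[testenv]
deps = -egit+https://git.openstack.org/openstack/neutron#egg=neutron
       -r{toxinidir}/requirements.txt
```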

On Thu, Jun 4, 2015 at 2:06 AM, Ihar Hrachyshka  wrote:

>
> On 06/03/2015 11:08 PM, Robert Collins wrote:
> > Hi, right now there is a little-used (e.g. it's not in any active
> > project these days) legacy feature of pbr/global-requirements:
> > we supported things that setuptools does not: to wit, tarball and
> > git requirements.
> >
> > Now, these things are supported by pip, so the implementation
> > involved recursing into pip from our setup.py (setup.py -> pbr ->
> > pip). What we exported into setuptools was only the metadata about
> > the dependency name. This meant that we were re-entering pip,
> > potentially many times - it was, to be blunt, awful.
> >
> > Fortunately we removed the recursive re-entry into pip in pbr 1.0.
> > This didn't remove the ability to parse requirements.txt files
> > that contain urls, but it does mean they are converted to the
> > simple dependency name when doing 'pip install .' in a project tree
> > (or pip install $projectname), and so they are effectively
> > unversioned - no lower and no upper bound. This works poorly in the
> > gate: please don't use tarball or git urls in requirements.txt (or
> > setup.cfg for that matter).
> >
> > We can still choose to use something from git or a tarball in test
> > jobs, *if* thats the right thing (which it rarely is: I'm just
> > being clear that the technical capability still exists)... but it
> > needs to be done outside of requirements.txt going forward. Its
> > also something that we can support with the new constraints system
> > if desired [which will operate globally once in place (it is an
> > extension of global-requirements)].
> >
> > One question that this raises, and this is why I wrote the email:
> > is there any need to support this at all - can we say that we won't
> > use tarball/vcs support at all and block it as a policy step in
> > global requirements? AIUI both git and tarball support is
> > problematic for CI jobs due to the increased flakiness of depending
> > on network resources... so it's actively harmful anyway.
> >
>
> Lots of Neutron modules, like advanced services or out-of-tree
> plugins, rely on neutron code being checked out from git [1]. I'm not
> saying it's the way to go forward, and there were plans to stop relying
> on the latest git to avoid frequent breakages, but that's not yet implemented.
>
> [1]:
> http://git.openstack.org/cgit/openstack/neutron-vpnaas/tree/tox.ini#n10
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Adam Young

On 06/04/2015 09:40 AM, Sean Dague wrote:

So I feel like I understand the high level dynamic policy end game. I
feel like what I'm proposing for policy engine with encoded defaults
doesn't negatively impact that. I feel there is a middle chunk where
perhaps we've got different concerns or different dragons that we see,
and are mostly talking past each other. And I don't know how to bridge
that. All the keystone specs I've dived into definitely assume a level
of understanding of keystone internals and culture that aren't obvious
from the outside.


Policy is not currently designed to be additive; let's take the Nova rule

"get_network": "rule:admin_or_owner or rule:shared or rule:external or 
rule:context_is_advsvc"

FROM 
http://git.openstack.org/cgit/openstack/neutron/tree/etc/policy.json#n27

This pulls in

"external": "field:networks:router:external=True",
Now, we have a single JSON file that implements this. Let's say that you 
ended up coding exactly this rule in Python. What would that mean?  
Either you make some way of initializing oslo.policy from a Python 
object, or you enforce outside of oslo.policy (custom nova code).  If it 
is custom code, you have to say "run oslo or run my logic" 
everywhere... you can see that this approach leads to fragmentation of 
policy enforcement.


So, instead, you go with "initialize oslo from Python."  We currently have 
the idea of multiple policy files in the directory, so you just treat 
the Python code as a file with either the lowest or highest alphabetical 
order, depending.  Now, each policy file gets read, and the rules are a 
hashtable, keyed by the rule name.  So both get_network and external are 
keys that get read in.  If 'overwrite' is set, it will only process the 
last set of rules (replacing all rules), but I think what we want here is 
just update:


http://git.openstack.org/cgit/openstack/oslo.policy/tree/oslo_policy/policy.py#n361
Which would mix together the existing rules with the rules from the 
policy files.



So... what would your intention be with hardcoding the policy in Nova?  
That your rule gets overwritten with the rule that comes from the 
centralized policy store, or that your rule gets executed in addition to 
the rule from central?  Neither is going to get you what you want, 
which is "Make sure you can't break Nova by changing policy."


[openstack-dev] [fuel] bp updates & spec maintenance

2015-06-04 Thread Andrew Woodward
Since the meeting was closed prior to having a discussion for this topic, I
will post my notes here as well.

I will start updating some of the BP's that have landed in 6.1 to reflect
their current status

There are a number of specs open for BPs whose code has landed. We will
merge the current revisions of these specs; if there are any other issues
or revisions, we will need to open a new CR for them, likely against 7.0.

For specs still open that didn't make 6.1, I will push revisions to move
them to 7.0.

For specs that landed but whose code didn't make 6.1, I will create reviews
to move them to 7.0.

Lastly, I will start to update BPs that are already planned to be targeted
to 7.0. If you have something that should be targeted, please raise it on
the ML.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Sean Dague
On 06/04/2015 01:03 PM, Adam Young wrote:
> On 06/04/2015 09:40 AM, Sean Dague wrote:
>> So I feel like I understand the high level dynamic policy end game. I
>> feel like what I'm proposing for policy engine with encoded defaults
>> doesn't negatively impact that. I feel there is a middle chunk where
>> perhaps we've got different concerns or different dragons that we see,
>> and are mostly talking past each other. And I don't know how to bridge
>> that. All the keystone specs I've dived into definitely assume a level
>> of understanding of keystone internals and culture that aren't obvious
>> from the outside. 
> 
> Policy is not currently designed to be additive; let's take the Nova rule
> 
> "get_network": "rule:admin_or_owner or rule:shared or rule:external or
> rule:context_is_advsvc"
> 
> FROM
> http://git.openstack.org/cgit/openstack/neutron/tree/etc/policy.json#n27
> 
> This pulls in
> 
> "external": "field:networks:router:external=True",
> Now, we have a single JSON file that implements this. Let's say that you
> ended up coding exactly this rule in Python. What would that mean? 
> Either you make some way of initializing oslo.policy from a Python
> object, or you enforce outside of oslo.policy (custom nova code).  If it
> is custom code, you have to say "run oslo or run my logic"
> everywhere... you can see that this approach leads to fragmentation of
> policy enforcement.
> 
> So, instead, you go with "initialize oslo from Python."  We currently have
> the idea of multiple policy files in the directory, so you just treat
> the Python code as a file with either the lowest or highest alphabetical
> order, depending.  Now, each policy file gets read, and the rules are a
> hashtable, keyed by the rule name.  So both get_network and external are
> keys that get read in.  If 'overwrite' is set, it will only process the
> last set of rules (replacing all rules), but I think what we want here is
> just update:
> 
> http://git.openstack.org/cgit/openstack/oslo.policy/tree/oslo_policy/policy.py#n361
> Which would mix together the existing rules with the rules from the
> policy files. 
> 
> 
> So... what would your intention be with hardcoding the policy in Nova? 
> That your rule gets overwritten with the rule that comes from the
> centralized policy store, or that your rule gets executed in addition to
> the rule from central?  Neither is going to get you what you want,
> which is "Make sure you can't break Nova by changing policy."

It gets overwritten by the central store.

And you are wrong, that gives me what I want, because we can emit a
WARNING in the logs if the patch is something crazy. The operators will
see it, and be able to fix it later.

I'm not trying to prevent people from changing their policy in crazy
ways. I'm trying to build in some safety net where we can detect it's
kind of a bad idea and emit that information a place that Operators can
see and sort out later, instead of pulling their hair out.

But you can only do that if you have encoded what the default is, plus
annotations about ways in which changing the default is unwise.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-04 Thread Assaf Muller


- Original Message -
> One reason for not sending the heartbeat from a separate greenthread could be
> that the agent is already doing it [1].
> The current proposed patch addresses the issue blindly - that is to say,
> before declaring an agent dead, let's wait some more time because it
> could be stuck doing stuff. In that case I would probably make the
> multiplier (currently 2x) configurable.
> 
> The reason for which state report does not occur is probably that both it and
> the resync procedure are periodic tasks. If I got it right they're both
> executed as eventlet greenthreads but one at a time. Perhaps then adding an
> initial delay to the full sync task might ensure the first thing an agent
> does when it comes up is sending a heartbeat to the server?

There's a patch that is related to this issue:
https://review.openstack.org/#/c/186584/

I made a comment there where, at least to me, it makes a lot of sense to insert
a report_state call in the after_start method, right after the agent initializes
but before it performs the first full sync. So, right here before line 560:
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/agent.py#L560

That should help *some* of the issues discussed in this thread, but not all.
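To make both suggestions concrete - a heartbeat loop on its own thread of
execution, plus one state report before the first full sync - here is a
rough sketch (plain threads stand in for eventlet greenthreads, a counter
stands in for the RPC report_state call, and all names are illustrative,
not the agent's real API):

```python
import threading
import time

# Illustrative sketch only: plain threads in place of eventlet greenthreads.
class L3AgentSketch:
    def __init__(self, report_interval=0.05):
        self.heartbeats = 0
        self._interval = report_interval
        self._stop = threading.Event()
        self._reporter = threading.Thread(target=self._report_loop, daemon=True)

    def _report_loop(self):
        # Heartbeats keep flowing even while the main loop is busy resyncing.
        while True:
            self.heartbeats += 1
            if self._stop.wait(self._interval):
                break

    def start(self):
        self.heartbeats += 1      # one state report *before* the first full sync
        self._reporter.start()
        self._full_sync()         # long-running initial resync

    def _full_sync(self):
        time.sleep(0.2)           # simulate processing many routers

    def stop(self):
        self._stop.set()
        self._reporter.join()

agent = L3AgentSketch()
agent.start()
agent.stop()
```

Even while _full_sync() blocks the main thread, the reporter keeps
bumping the counter - which is exactly the property we want from the
agent's heartbeat.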

> 
> On the other hand, while doing the initial full resync, is the agent able to
> process updates? If not perhaps it makes sense to have it down until it
> finishes synchronisation.
> 
> Salvatore
> 
> [1]
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l3/agent.py#n587
> 
> On 4 June 2015 at 16:16, Kevin Benton < blak...@gmail.com > wrote:
> 
> 
> 
> 
> Why don't we put the agent heartbeat into a separate greenthread on the agent
> so it continues to send updates even when it's busy processing changes?
> On Jun 4, 2015 2:56 AM, "Anna Kamyshnikova" < akamyshnik...@mirantis.com >
> wrote:
> 
> 
> 
> Hi, neutrons!
> 
> Some time ago I discovered a bug for l3 agent rescheduling [1]. When there
> are a lot of resources and agent_down_time is not big enough neutron-server
> starts marking l3 agents as dead. The same issue has been discovered and
> fixed for DHCP-agents. I proposed a change similar to those that were done
> for DHCP-agents. [2]
> 
> There is no unified opinion on this bug and the proposed change, so I want to
> ask developers whether it is worth continuing work on this patch or not.
> 
> [1] - https://bugs.launchpad.net/neutron/+bug/1440761
> [2] - https://review.openstack.org/171592
> 
> --
> Regards,
> Ann Kamyshnikova
> Mirantis, Inc
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] How to deal with aborted image read?

2015-06-04 Thread Chris Friesen

On 06/04/2015 03:01 AM, Flavio Percoco wrote:

On 03/06/15 16:46 -0600, Chris Friesen wrote:

We recently ran into an issue where nova couldn't write an image file due to
lack of space and so just quit reading from glance.

This caused glance to be stuck with an open file descriptor, which meant that
the image consumed space even after it was deleted.

I have a crude fix for nova at "https://review.openstack.org/#/c/188179/";
which basically continues to read the image even though it can't write it.
That seems less than ideal for large images though.

Is there a better way to do this?  Is there a way for nova to indicate to
glance that it's no longer interested in that image and glance can close the
file?

If I've followed this correctly, on the glance side I think the code in
question is ultimately glance_store._drivers.filesystem.ChunkedFile.__iter__().


Actually, to be honest, I was quite confused by the email :P

Correct me if I still didn't understand what you're asking.

You ran out of space on the Nova side while downloading the image and
there's a file descriptor leak somewhere either in that lovely (sarcasm)
glance wrapper or in glanceclient.


The first part is correct, but the file descriptor is actually held by 
glance-api.


Just by reading your email and glancing at your patch, I believe the bug
might be in glanceclient, but I'd need to dive into this. The piece of
code you'll need to look into is [0].

glance_store is just used server side. If that's what you meant -
glance is keeping the request and the ChunkedFile around - then yes,
glance_store is the place to look into.

[0]
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v1/images.py#L152


I believe what's happening is that the ChunkedFile code opens the file and 
creates the iterator.  Nova then starts iterating through the file.


If nova (or any other user of glance) iterates all the way through the file then 
the ChunkedFile code will hit the "finally" clause in __iter__() and close the 
file descriptor.


If nova starts iterating through the file and then stops (due to running out of 
room, for example), the ChunkedFile.__iter__() routine is left with an open file 
descriptor.  At this point deleting the image will not actually free up any space.


I'm not a glance guy so I could be wrong about the code.  The externally-visible 
data are:

1) glance-api is holding an open file descriptor to a deleted image file
2) If I kill glance-api the disk space is freed up.
3) If I modify nova to always finish iterating through the file the problem 
doesn't occur in the first place.


Chris




Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-04 Thread Yee, Guang
I am confused about the goal. Are we saying we should allow operators to modify 
the access policies but then warn them if they do? But if operators *intend* to 
modify the policies in order to fit their compliance/security needs, which is 
likely the case, aren't the warning messages confusing and counterintuitive?


Guang


-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: Thursday, June 04, 2015 10:16 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

On 06/04/2015 01:03 PM, Adam Young wrote:
> On 06/04/2015 09:40 AM, Sean Dague wrote:
>> So I feel like I understand the high level dynamic policy end game. I 
>> feel like what I'm proposing for policy engine with encoded defaults 
>> doesn't negatively impact that. I feel there is a middle chunk where 
>> perhaps we've got different concerns or different dragons that we 
>> see, and are mostly talking past each other. And I don't know how to 
>> bridge that. All the keystone specs I've dived into definitely assume 
>> a level of understanding of keystone internals and culture that 
>> aren't obvious from the outside.
> 
> Policy is not currently designed to be additive; let's take the Nova
> rule
> 
> "get_network": "rule:admin_or_owner or rule:shared or rule:external
> or rule:context_is_advsvc"
> 
> FROM
> http://git.openstack.org/cgit/openstack/neutron/tree/etc/policy.json#n27
> 
> This pulls in
> 
> "external": "field:networks:router:external=True",
> Now, we have a single JSON file that implements this. Let's say that 
> you ended up coding exactly this rule in Python. What would that mean?
> Either you make some way of initializing oslo.policy from a Python 
> object, or you enforce outside of oslo.policy (custom Nova code). If 
> it is custom code, you have to say "run oslo or run my logic"
> everywhere... you can see that this approach leads to fragmentation of 
> policy enforcement.
> 
> So, instead, you go the "initialize oslo from Python" route.  We currently 
> have the idea of multiple policy files in the directory, so you just 
> treat the Python code as a file with either the lowest or highest ABC 
> order, depending.  Now, each policy file gets read, and the rules are 
> a hashtable, keyed by the rule name.  So both get_network and external 
> are keys that get read in.  If 'overwrite' is set, it will only 
> process the last set of rules (replaces all rules), but I think what 
> we want here is just update:
> 
> http://git.openstack.org/cgit/openstack/oslo.policy/tree/oslo_policy/policy.py#n361
> which would mix together the existing rules with the rules from the 
> policy files.
> 
> 
> So...what would your intention be with hardcoding the policy in Nova? 
> That your rule gets overwritten with the rule that comes from the 
> centralized policy store, or that your rule gets executed in addition 
> to the rule from central?  Neither is going to get you what you want, 
> which is "Make sure you can't break Nova by changing Policy".

It gets overwritten by the central store.

And you are wrong, that gives me what I want, because we can emit a WARNING in 
the logs if the patch is something crazy. The operators will see it, and be 
able to fix it later.

I'm not trying to prevent people from changing their policy in crazy ways. I'm 
trying to build in some safety net where we can detect it's kind of a bad idea 
and emit that information a place that Operators can see and sort out later, 
instead of pulling their hair out.

But you can only do that if you have encoded what's the default, plus 
annotations about ways that changing the default are unwise.

-Sean

--
Sean Dague
http://dague.net



[openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Steven Dake (stdake)
Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer, Bradley Jones, to tackle 
the work.

I am looking for more volunteers to tackle this high-impact effort to bring 
Containers to OpenStack, either in the existing Magnum core team or as new 
contributors.   If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve



Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Steven Dake (stdake)
My vote is +1 for a unified core team for all Magnum development which in the 
future will include the magnum-ui repo, the python-magnumclient repo, the 
magnum repo, and the python-k8sclient repo.

Regards
-steve

From: Steven Dake mailto:std...@cisco.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, June 4, 2015 at 10:58 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need 
a vote from the core team



