Re: [openstack-dev] [app-catalog][infra] Stale entries

2015-06-05 Thread Georgy Okrokvertskhov
+1 for the periodic job. I don't know how it fits into the current OpenStack
infrastructure, but we definitely need this. As far as I know, there is a place for
3rd-party gates and CI/CD systems, so probably the best way to do this for
now is to set it up externally to the OpenStack infra.

Thanks
Gosha

On Fri, Jun 5, 2015 at 5:10 PM, Fox, Kevin M  wrote:

>  We already have one Glance image in the app catalog that returns a 404.
>
> Since the app catalog contains entries that are hosted outside of
> OpenStack, Gerrit hooks are not necessarily the right way to check that
> links still work, since they can break weeks/months later.
>
> We may need some kind of job that just runs periodically to check that links
> are valid, keeps track of which are valid/invalid, and notifies the owners
> (via email?).
>
> Is there any infrastructure in place that could be used for this? Any
> other thoughts on how this situation might be improved?
>
> Thanks,
> Kevin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-05 Thread Tripp, Travis S
I also have an interest here and can approach it like Thai, hoping to give some
input and reviews as time allows this cycle.  I just left a few comments on your
spec, Brad.

Thanks,
Travis


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Keystone] Glance and trusts

2015-06-05 Thread Bhandaru, Malini K
Continuing with David's example and the need, which Adam points out, to control
access to a Swift object:

How about using the Glance token from the glance-api service to glance-registry, but
carrying along extra data in the call (namely user ID, domain, and public/private
information), so the object can be access controlled?

Alternately, an encapsulating token -- keeping it simple, only two levels.  This
protects against user tokens that expire on the cusp.
We could check the user's quota before attempting the storage.

Should the user not have paid their dues, Glance knows which objects to garbage
collect!

Regards
Malini

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Friday, June 05, 2015 4:11 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance][Keystone] Glance and trusts

On 06/05/2015 10:39 AM, Dolph Mathews wrote:

On Thu, Jun 4, 2015 at 1:54 AM, David Chadwick wrote:
I did suggest another solution to Adam whilst we were in Vancouver, and
this mirrors what happens in the real world today when I order something
from a supplier and a whole supply chain is involved in creating the end
product that I ordered. This is not too dissimilar to a user requesting
a new VM. Essentially each element in the supply chain trusts the two
adjacent elements. It has contracts with both its customers and its
suppliers to define the obligations of each party. When something is
ordered from it, it trusts the purchaser, and on the strength of this,
it will order from its suppliers. Each element may or may not know who
the original customer is, but if it needs to know, it trusts the
purchaser to tell it. Furthermore the customer does not need to delegate
any of his/her permissions to his/her supplier. If we used such a system
of trust between Openstack services, then we would not need delegation
of authority and "trusts" as they are implemented today. It could
significantly simplify the interactions between OpenStack services.

+1! I feel like this is the model that we started with in OpenStack, and have 
grown additional complexity over time without much benefit.

We could roll Glance into Nova, too, and get the same benefit.  There is a
reason we have separate services.  Glance should not trust Nova for all
operations, just some.

David's example elides the fact that there are checks built into the supply
chain system to prevent cheating.







regards
David

On 03/06/2015 21:03, Adam Young wrote:
> On 06/02/2015 12:57 PM, Mikhail Fedosin wrote:
>> Hello! I think it's a good time to discuss implementation of trusts in
>> Glance v2 and v3 api.
>>
>> Currently we have two different situations during image creation where
>> our token may expire, which leads to unsuccessful operation.
>>
>> First is connection between glance-api and glance-registry. In this
>> case we have a solution (https://review.openstack.org/#/c/29967/) -
>> use_user_token parameter in glance-api.conf, but it is True by default
>> . If it's changed to False then glance-api will use its own
>> credentials to authorize in glance-registry and it prevents many
>> possible issues with user token expiration. So, I'm interested if
>> there are some performance degradations if we change use_user_token to
>> False and what are the reasons against making it the default value.
>>
>> Second one is linked with Swift. Current implementation uploads chunks
>> one by one and requires authorization each time. It may lead to
>> problems: for example we have to upload 100 chunks, after 99th one,
>> token expired and glance can't upload the last one, catches an
>> exception and tries to remove stale chunks from storage. Of course it
>> will fail, because token is not valid anymore, and that's why there
>> will be 99 garbage objects in the storage.
>> With Single-tenant mode glance uses its own credentials to upload
>> files, so it's possible to create new connection on each chunk upload
>> or catch Unauthorized exception and recreate connections only in that
>> cases. But with Multi-tenant mode there is no way to do it, because
>> user credentials are required. So it seems that trusts is the only one
>> solution here.
> The problem with using trusts is that it would need to be created
> per-user, and that is going to be expensive.  It would be possible, as
> Heat does something of this nature:
>
> 1. User calls glance,
> 2. Glance creates a trust with some limitation, either time or number of
> uses
> 3.  Trusts are used for all operations with swift.
> 4. Glance should clean up the trust when it is complete.
>
> I don't love the solution, but I think it is the best we have.  Ideally
> the user would opt in to the trust, but in this case, it is kindof
> implicit by them calling the API.
>
>
> We should limit the trust creation to only have those roles (or a
> subset) on the token used to create the trust.
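To make the quoted workflow concrete, here is a minimal sketch of what steps 2-4
could look like, assuming python-keystoneclient's v3 trusts API; the helper name,
its arguments, and the omitted Swift step are illustrative, not Glance code:

    def upload_with_trust(keystone, user_id, glance_user_id, project_id,
                          caller_roles):
        # 'keystone' is assumed to be a keystoneclient.v3.client.Client
        # authenticated as the glance service user.
        # Step 2: create a trust limited to (a subset of) the caller's roles,
        # bounded by a number of uses rather than a long expiry.
        trust = keystone.trusts.create(trustor_user=user_id,
                                       trustee_user=glance_user_id,
                                       project=project_id,
                                       role_names=caller_roles,
                                       impersonation=True,
                                       remaining_uses=1)
        try:
            # Step 3: obtain a trust-scoped token and perform the Swift
            # uploads with it (omitted here).
            pass
        finally:
            # Step 4: clean up the trust once the operation completes.
            keystone.trusts.delete(trust)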
>
>
>
>
>> I would be happy to hear your opinions on that matter. If you know
>> other situations where trusts are useful or some ot

Re: [openstack-dev] [nova] [glance] How to deal with aborted image read?

2015-06-05 Thread Ian Cordasco


On 6/5/15, 02:55, "Flavio Percoco"  wrote:

>On 04/06/15 11:46 -0600, Chris Friesen wrote:
>>On 06/04/2015 03:01 AM, Flavio Percoco wrote:
>>>On 03/06/15 16:46 -0600, Chris Friesen wrote:
We recently ran into an issue where nova couldn't write an image file
due to
lack of space and so just quit reading from glance.

This caused glance to be stuck with an open file descriptor, which
meant that
the image consumed space even after it was deleted.

I have a crude fix for nova at https://review.openstack.org/#/c/188179/
which basically continues to read the image even though it can't write it.
That seems less than ideal for large images though.
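Roughly, that workaround amounts to something like the following sketch (function
and variable names are made up, not the actual patch):

    def copy_image(image_chunks, dest):
        # image_chunks: the chunk iterator handed back by the glance client
        # dest: a writable file object on the compute host
        try:
            for chunk in image_chunks:
                dest.write(chunk)
        except IOError:
            # Out of space: keep consuming the remaining chunks so the
            # server-side iterator finishes and closes its file descriptor.
            for _ in image_chunks:
                pass
            raise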

Is there a better way to do this?  Is there a way for nova to indicate
to
glance that it's no longer interested in that image and glance can
close the
file?

If I've followed this correctly, on the glance side I think the code in
question is ultimately
glance_store._drivers.filesystem.ChunkedFile.__iter__().
>>>
>>>Actually, to be honest, I was quite confused by the email :P
>>>
>>>Correct me if I still didn't understand what you're asking.
>>>
>>>You ran out of space on the Nova side while downloading the image and
>>>there's a file descriptor leak somewhere either in that lovely (sarcasm)
>>>glance wrapper or in glanceclient.
>>
>>The first part is correct, but the file descriptor is actually held by
>>glance-api.
>>
>>>Just by reading your email and glancing at your patch, I believe the bug
>>>might be in glanceclient but I'd need to dive into this. The piece of
>>>code you'll need to look into is [0].
>>>
>>>glance_store is just used server side. If that's what you meant -
>>>glance is keeping the request and the ChunkedFile around - then yes,
>>>glance_store is the place to look into.
>>>
>>>[0]
>>>https://github.com/openstack/python-glanceclient/blob/master/glanceclien
>>>t/v1/images.py#L152
>>
>>I believe what's happening is that the ChunkedFile code opens the file
>>and creates the iterator.  Nova then starts iterating through the
>>file.
>>
>>If nova (or any other user of glance) iterates all the way through the
>>file then the ChunkedFile code will hit the "finally" clause in
>>__iter__() and close the file descriptor.
>>
>>If nova starts iterating through the file and then stops (due to
>>running out of room, for example), the ChunkedFile.__iter__() routine
>>is left with an open file descriptor.  At this point deleting the
>>image will not actually free up any space.
>>
>>I'm not a glance guy so I could be wrong about the code.  The
>>externally-visible data are:
>>1) glance-api is holding an open file descriptor to a deleted image file
>>2) If I kill glance-api the disk space is freed up.
>>3) If I modify nova to always finish iterating through the file the
>>problem doesn't occur in the first place.
>
>Gotcha, thanks for explaining. I think the problem is that there might
>be a reference leak and therefore the FD is kept opened. Probably the
>request interruption is not getting to the driver. I've filed this
>bug[0] so we can look into it.
>
>[0] https://bugs.launchpad.net/glance-store/+bug/1462235
>
>Flavio
>
>-- 
>@flaper87
>Flavio Percoco

So the problem is with how we use ResponseSerializer and the ChunkedFile
(https://git.openstack.org/cgit/openstack/glance/tree/glance/api/v2/image_data.py#n222).
I think the problem we'll have is that webob provides nothing on a Response
(https://webob.readthedocs.org/en/latest/modules/webob.html#response) to
hook into so we can close the ChunkedFile.

I wonder whether, if we used the body_file attribute, webob would close the file
when the response is closed (because I'm assuming that nova/glanceclient
close the response with which they're downloading the data).
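For illustration, the shape of reader that would make this cleanable looks
something like the sketch below (in the spirit of glance_store's ChunkedFile, but
not the actual code); the explicit close() is the hook a response object, or
whatever notices the disconnect, would have to call:

    class ClosingChunkedReader(object):
        """Yield fixed-size chunks; allow early termination to be cleaned up."""

        CHUNKSIZE = 65536

        def __init__(self, filepath):
            self.fp = open(filepath, 'rb')

        def __iter__(self):
            try:
                chunk = self.fp.read(self.CHUNKSIZE)
                while chunk:
                    yield chunk
                    chunk = self.fp.read(self.CHUNKSIZE)
            finally:
                self.close()

        def close(self):
            # Safe to call repeatedly; releases the descriptor even if the
            # consumer stopped iterating partway through.
            if self.fp is not None:
                self.fp.close()
                self.fp = None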

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [opnfv-tech-discuss] [Tricircle] Polling for weekly team meeting

2015-06-05 Thread Zhipeng Huang
Hi Iben,

No worries about the name being confusing :P Yes, you are right: Tricircle deals
with the use of many environments, not their setup or config.

If I put my OPNFV hat on, I think we could definitely jointly investigate
some of the multi-site automated setup-config-deploy requirements, among
Pharos, IPv6 and Multisite projects.

Tricircle is just one among many other projects/subprojects that try to
deal with cross-site OpenStack deployment, and after we have good enough
requirements, we should try to implement them in Tricircle, or Cells, or
multi-region, or whatever upstream projects fit those requirements.

Regarding your second question, Tricircle doesn't care about the underlying
OpenStack structure, because it just provides a middle layer of extended
communication. Think of it as an extension board :) So as long as we have
normal OpenStack APIs, we can work with Tricircle just fine.

On Fri, Jun 5, 2015 at 11:03 PM, Rodriguez, Iben  wrote:

>  Hello Joe,
>
> This is very cool. A few questions...
>
> Tricircle seems to deal mostly with the use of many environments and not
> their setup or configuration, is that right?
>
> We have a few multi-site projects already like pharos and IPv6. Can we
> make an assessment to understand where each one fits in the workflow of the
> platform lifecycle?
>
> Do the underclouds need to be similar for Tricircle to function properly?
> What if some OpenStack deployments are different from others? Are we just looking
> for API compatibility, or will certain features or functions also be needed?
>
> Many thanks for sharing this project. It is good to understand the
> concepts.
>
> I b e n
> 4087824726
>
> From: Zhipeng Huang
> Sent: Friday, June 5, 07:44
> Subject: [opnfv-tech-discuss] [openstack-dev][Tricircle] Polling for
> weeklyteam meeting
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: opnfv-tech-disc...@lists.opnfv.org
>
> Hi All,
>
> The Tricircle project has been on stackforge for a while, without much
> activity.
>
> Now we will completely restructure the code base to make it more
> community/open-source friendly, and hopefully less corporate-PoC looking :P
>
> In the meantime, I want to call for attention from people who are
> interested in this project to participate in a time poll for our weekly
> meeting:
>
> http://doodle.com/d7fvmgvrwv8y3bqv
>
> I would recommend 13:00 UTC because it is one of the few time periods when all
> the continents are able to be awake (tougher on the US though).
>
> Please find more info on Tricircle at
> https://github.com/stackforge/tricircle (the new code base will come in the
> next few weeks). It mainly aims to address OpenStack deployment across
> multiple sites.
>
> Also depending on OPNFV Multisite Project's decision, Tricircle might be
> one of the upstream projects of Multisite, which aims at developing
> requirements for NFV multi-NFVI-PoPs VIM deployment. More info :
> https://wiki.opnfv.org/multisite  https://www.opnfv.org/arno
>
> --
>
> Zhipeng (Howard) Huang
>
> Standard Engineer
>
> IT Standard & Patent/IT Product Line
>
> Huawei Technologies Co., Ltd
>
> Email: huangzhip...@huawei.com
>
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
>
> Research Assistant
>
> Mobile Ad-Hoc Network Lab, Calit2
>
> University of California, Irvine
>
> Email: zhipe...@uci.edu
>
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-06-05 Thread Dmitry Borodaenko
On Fri, May 15, 2015 at 06:33:35PM -0700, Joe Gordon wrote:
> On Fri, May 15, 2015 at 2:27 PM, Joe Gordon  wrote:
> > On Thu, May 14, 2015 at 3:52 AM, John Garbutt 
> > wrote:
> >> Some great points make here.
> >>
> >> Lets try decide something, and move forward here.
> >>
> >> Key requirements seem to be:
> >> * we need something that gives us readable diagrams
> >> * if its not easy to edit, it will go stale
> >> * ideally needs to be source based, so it lives happily inside git
> >> * needs to integrate into our sphinx pipeline
> >> * ideally have an opensource editor for that format (import and
> >> export), for most platforms
> >>
> >> ascii art fails on many of these, but its always a trade off.
> >>
> >> Possible way forward:
> >> * lets avoid merging large hard to edit bitmap style images
> >> * nova-core reviewers can apply their judgement on merging source based
> >> formats
> >> * however it *must* render correctly in the generated html (see result
> >> of docs CI job)
> >>
> >> Trying out SVG, and possibly blockdiag, seem like the front runners.
> >> I don't think we will get consensus without trying them, so lets do that.
> >>
> >> Will that approach work?
> >>
> > Sounds like a great plan.
> 
> After further investigation, blockdiag is useless for moderately complex
> diagrams.

You're right; in my experience blockdiag is useless even for relatively
simple diagrams.

Lately I've been using ascii2svg [0] with great success; it allows you
to have fairly complex freeform ascii art diagrams that render quite
nicely into SVG. As an example, here's how I integrated it into LaTeX
beamer [1], and here's how the result looks in PDF [2]. I've also included
the rendered files in the git repo [3], so that one doesn't have to install
a2s to update the text. The same approach can work just as well with Sphinx.

[0] http://9vx.org/~dho/a2s/
[1] https://github.com/angdraug/beamer-fuel-ceph
[2] https://drive.google.com/a/mirantis.com/file/d/0BxYswyvIiAEZUEp4aWJPYVNjeU0
[3] https://github.com/angdraug/beamer-fuel-ceph/blob/master/fuel-components.svg

Before someone comes back with the argument that ascii art limits the
amount of complexity you can cram into a single diagram, I'd like to
point out that after several years of using Enterprise Architect to
maintain interlinked collections of wall-sized UML diagrams, I've come
to the realization that when your diagram no longer fits in a terminal
window's worth of ascii art and can't remain legible when you fit it
into a presentation slide, it's too complex to be helpful in
understanding your architecture. When that happens, you have to stay at
a higher abstraction level and have separate diagrams for each lower-level
component. If you can't do that, you have to fix the architecture.

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog][infra] Stale entries

2015-06-05 Thread Fox, Kevin M
We already have one Glance image in the app catalog that returns a 404.

Since the app catalog contains entries that are hosted outside of OpenStack, 
Gerrit hooks are not necessarily the right way to check that links still work, 
since they can break weeks/months later.

We may need some kind of job that just runs periodically to check that links are
valid, keeps track of which are valid/invalid, and notifies the owners (via
email?).
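To make the idea concrete, a rough sketch of such a job (it assumes a hypothetical
assets.yaml in which each catalog entry carries a name, url, and owner_email; the
real catalog format may differ):

    import smtplib
    from email.mime.text import MIMEText

    import requests
    import yaml

    def find_broken_links(catalog_path):
        """Return (asset, status) pairs for every entry whose URL is not OK."""
        with open(catalog_path) as f:
            assets = yaml.safe_load(f)
        broken = []
        for asset in assets:
            try:
                resp = requests.head(asset['url'], allow_redirects=True,
                                     timeout=10)
                status = resp.status_code
            except requests.RequestException:
                status = None
            if status != 200:
                broken.append((asset, status))
        return broken

    def notify_owners(broken, smtp_host='localhost'):
        """Mail each owner about their dead entries."""
        server = smtplib.SMTP(smtp_host)
        for asset, status in broken:
            msg = MIMEText('%s returned %s' % (asset['url'], status))
            msg['Subject'] = '[app-catalog] broken link: %s' % asset['name']
            msg['From'] = 'catalog-checker@example.org'
            msg['To'] = asset['owner_email']
            server.sendmail(msg['From'], [msg['To']], msg.as_string())
        server.quit()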

Is there any infrastructure in place that could be used for this? Any other 
thoughts on how this situation might be improved?

Thanks,
Kevin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo v3 identity problems

2015-06-05 Thread Amy Zhang
Hi everyone,

Thanks for helping me. I had the wrong filter on my mail, so I missed the
topic. Thanks to Farhan for forwarding the email to me.

Thanks Dolph Mathews, you got the point. Bingo! I didn't assign the
admin user to the default domain. I didn't have this problem when I worked
with the Icehouse version; I guess it didn't check the domain scope for the admin
user back then. After I assigned admin to the default domain, it all works
correctly now.  Thanks a lot.

Thanks Lin Hua Cheng, yes, the problem is that I missed the domain-scoped
token for the admin user.

To Rich Megginson: I am using the v3 sample policy file, which is
https://github.com/openstack/keystone/blob/f6c01dd1673b290578e9fff063e27104412ffeda/etc/policy.v3cloudsample.json
For a change in the policy file, you don't need to restart the service. You
only restart the service when you change the code in the server.

Thanks for all.

Amy Zhang

On Fri, Jun 5, 2015 at 3:17 PM, Farhan Patwa  wrote:

>
> Forwarded conversation
> Subject: [openstack-dev] Kilo v3 identity problems
> 
>
> From: *Amy Zhang* 
> Date: Wed, Jun 3, 2015 at 11:29 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
>
>
> Hi guys,
>
> I have installed Kilo and try to use identity v3. I am using v3 policy
> file. I changed the domain_id for cloud admin as "default". As cloud admin,
> I tried "openstack domain list" and got the error message saying that I was
> not authorized.
>
> The part I changed in policy.json:
>
> "cloud_admin": "rule:admin_required and domain_id:default",
>
>
> The error I got from "openstack domain list":
>
> ERROR: openstack You are not authorized to perform the requested action:
> identity:create_domain (Disable debug mode to suppress these details.)
> (HTTP 403) (Request-ID: req-2f42b1da-9933-4494-9b39-c1664d154377)
>
> Has anyone tried identity v3 in Kilo? Did you have this problem? Any
> suggestions?
>
> Thanks
> Amy
> --
> Best regards,
> Amy (Yun Zhang)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> From: *Rich Megginson* 
> Date: Wed, Jun 3, 2015 at 11:52 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
>
>
>  Can you paste your policy file somewhere?  Did you restart the keystone
> service after changing your policy?  Can you provide your exact openstack
> command-line arguments and/or the rc file you sourced into your shell
> environment before running openstack?
>
>
>  Thanks
> Amy
> --
> Best regards,
> Amy (Yun Zhang)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> From: *Lin Hua Cheng* 
> Date: Wed, Jun 3, 2015 at 1:00 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
>
>
> The command requires a domain-scoped token.
>
> Did you set the environment variable so that OSC uses a domain-scoped
> token? This can be done by providing OS_DOMAIN_NAME instead of
> OS_PROJECT_NAME.
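For example, an rc file for a domain-scoped session could look roughly like this
(the usual OSC variable names, with placeholder values):

    export OS_IDENTITY_API_VERSION=3
    export OS_AUTH_URL=http://controller:5000/v3
    export OS_USERNAME=admin
    export OS_PASSWORD=secret
    export OS_USER_DOMAIN_NAME=Default
    # scope to a domain instead of a project:
    export OS_DOMAIN_NAME=Default
    # and make sure OS_PROJECT_NAME / OS_TENANT_NAME are not set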
>
> -Lin
>
> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> From: *Dolph Mathews* 
> Date: Wed, Jun 3, 2015 at 1:16 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
>
>
> I assume that by "v3 policy file" you're specifically referring to:
>
>
> https://github.com/openstack/keystone/blob/f6c01dd1673b290578e9fff063e27104412ffeda/etc/policy.v3cloudsample.json
>
> Which essentially illustrates enforcement of a much more powerful
> authorization model than most deployers are familiar with today. You'll
> need to create and consume a domain-based role assignment, for example (do
> you 

Re: [openstack-dev] [keystone][reseller] New way to get a project scoped token by name

2015-06-05 Thread Adam Young

On 06/05/2015 01:15 PM, Henry Nash wrote:
I am sure I have missed something along the way, but can someone 
explain to me why we need this at all.  Project names are unique 
within a domain, with the exception of the project that is acting as 
its domain (i.e. they can only every be two names clashing in a 
hierarchy at the domain level and below).  So why isn’t specifying 
“is_domain=True/False” sufficient in an auth scope along with the 
project name?


The limitation of "Project names are unique within a domain" is
artificial and something we should not be enforcing.  Names should only
be unique within the parent project.


This whole thing started by trying to distinguish a domain from a 
project within that domain that both have the same name. We can special 
case that, but it is not a great solution.






Henry

On 5 Jun 2015, at 18:02, Adam Young wrote:


On 06/03/2015 05:05 PM, Morgan Fainberg wrote:

Hi David,

There needs to be some form of global hierarchy delimiter - well 
more to the point there should be a common one across OpenStack 
installations to ensure we are providing a good and consistent (and 
more to the point inter-operable) experience to our users. I'm 
worried a custom defined delimiter (even at the domain level) is 
going to make it difficult to consume this data outside of the 
context of OpenStack (there are applications that are written to use 
the APIs directly).
We have one already.  We are working in JSON, and so instead of the project
name being a string, it can be an array.
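For example, the scope portion of an auth request could carry something like this
(names are purely illustrative):

    "scope": {
        "project": {
            "domain": {"name": "example.com"},
            "name": ["grandpa", "dad", "daughter"]
        }
    }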


Nothing else is backwards compatible.  Nothing else will ensure we
don't break existing deployments.


Moving forward, we should support DNS notation, but it has to be an
opt-in.




The alternative is to explicitly list the delimiter in the project ( 
e.g. {"hierarchy": {"delim": ".", "domain.project.project2"}} ). The 
additional need to look up the delimiter / set the delimiter when 
creating a domain is likely to make for a worse user experience than 
selecting one that is not different across installations.


--Morgan

On Wed, Jun 3, 2015 at 12:19 PM, David Chadwick wrote:




On 03/06/2015 14:54, Henrique Truta wrote:
> Hi David,
>
> You mean creating some kind of "delimiter" attribute in the domain
> entity? That seems like a good idea, although it does not solve the
> problem Morgan mentioned, which is the global hierarchy delimiter.

There would be no global hierarchy delimiter. Each domain would define
its own, and this would be carried in the JSON as a separate parameter so
that the recipient can tell how to parse hierarchical names.

David

>
> Henrique
>
> On Wed, 3 Jun 2015 at 04:21, David Chadwick wrote:
>
>
>
> On 02/06/2015 23:34, Morgan Fainberg wrote:
> > Hi Henrique,
> >
> > I don't think we need to specifically call out that we want a domain; we
> > should always reference the namespace as we do today. Basically, if we
> > ask for a project name we need to also provide its namespace (your
> > option #1). This clearly lines up with how we handle projects in domains
> > today.
> >
> > I would, however, focus on how to represent the namespace in a single
> > (usable) string. We've been delaying the work on this for a while since
> > we have historically not provided a clear way to delimit the hierarchy.
> > If we solve the issue with "what is the delimiter" between domain,
> > project, and subdomain/subproject, we end up solving the usability
>
> why not allow the top level domain/project to define the delimiter for
> its tree, and to carry the delimiter in the JSON as a new parameter.
> That provides full flexibility for all languages and locales
>
> David
>
> > issues with proposal #1, and not breaking the current behavior you'd
> > expect with implementing option #2 (which at face value feels to be API
> > incompatible/a break of current behavior).
> >
> > Cheers,
> > --Morgan
> >
> > On Tue, Jun 2, 2015 at 7:43 AM, Henrique Truta wrote:
> >
> 
>  >
> > Hi folks,
> >
> >
> > In Reseller [1], we’ll have the domains concept merged into projects,
   

Re: [openstack-dev] [neutron] Service Chain project IRC meeting minutes - 06/04/2015

2015-06-05 Thread Paul Carver

Cathy,

Make sure to take note when fall rolls around that "Pacific time" is
ambiguous. UTC does not observe daylight saving time, so a meeting at 1700 UTC
will be 10:00 PDT but 09:00 PST.


On 6/4/2015 5:17 PM, Cathy Zhang wrote:

Thanks for joining the service chaining meeting today! Sorry for the time 
confusion. We will correct the weekly meeting time to 1700UTC (10am pacific 
time) Thursday #openstack-meeting-4 on the OpenStack meeting page.

Meeting Minutes:
http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.html
Meeting Minutes (text): 
http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.txt
Meeting Log:
http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.log.html

The next meeting is scheduled for June 11 (same place and time).

Thanks,
Cathy




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Domain and Project naming

2015-06-05 Thread Adam Young

On 06/04/2015 10:49 PM, Dolph Mathews wrote:



On Wed, Jun 3, 2015 at 11:25 PM, Adam Young wrote:


With Hierarchical Multitenancy, we have the issue that a project
is currently restricted in its naming further than it should be.
The domain entity enforces that all project names under the
domain be unique, but really what we should say is that all
projects under a single parent project be unique.  However, we
have, at present, an API which allows a user to specify the domain
by either name or ID, and the project again by either name or ID, but
here we care only about the name.  This can be used either when
specifying the token, or in operations on the project API.

We should change project naming to be nestable, and since we don't
have a delimiter set, we should expect the names to be an array,
where today we might have:

"project": {
"domain": {
"id": "1789d1",
"name": "example.com "
},
"id": "263fd9",
"name": "project-x"
}

we should allow and expect:

"project": {
"domain": {
"id": "1789d1",
"name": "example.com "
},
"id": "263fd9",
"name": [ "grandpa", "dad", "daughter"]
}


What is the actual project name here,

In Python and JSON it is

[ "grandpa", "dad", "daughter"]

and how do I specify it using my existing OS_PROJECT_NAME environment 
variable?


Probably the simplest would be to quote it, and use single quotes for
the inner strings, like this:



"[ 'grandpa', 'dad', 'daughter']"

for person in  "[ 'grandpa', 'dad', 'daughter']" ; do echo $person; done
[ 'grandpa', 'dad', 'daughter']


For the CLI, it might be possible to specify multiple values such as

--os-project-name= "grandpa" "dad" "daughter"
or
--os-project-name= "grandpa" --os-project-name="dad" 
--os-project-name="daughter"






This will, of course, break Horizon and lots of other things,
which means we need a reasonable way to display these paths.  The
typical UI approach is a breadcrumb trail, and I think something
where we put the segments of the path in the UI, each clickable,
should be understandable: I'll defer to the UX experts if this is
reasonable or not.

The alternative is that we attempt to parse the project names.
Since we have not reserved a delimiter, we will break someone
somewhere if we force one on people.


As an alternative, we should start looking into following DNS
standards for naming projects and hosts.  While a domain should
not be required to be a DNS-registered domain name, we should allow
for the case where a user wants that to be the case, and to
synchronize naming across multiple clouds.  In order to enforce
this, we would have to have an indicator on a domain name that it
has been checked with DNS; ideally, the user would add a special
SRV or TXT record or something that Keystone could use to confirm
that the user has OKed this domain name being used by this
cloud... or something perhaps with DNSSEC, checking that a user has
permission to assign a specific domain name to a set of resources
in the cloud.  If we do that, the projects under that domain
should also be valid DNS subzones, and the hosts either FQDNs or
some alternate record... this would tie in well with Designate.

Note that I am not saying "force this"  but rather "allow this" as
it will simplify the naming when bursting from cloud to cloud:
the domain and project names would then be synchronized via DNS
regardless of hosting provider.

As an added benefit, we could provide a SRV or TEXT record (or
some new URL type..I heard one is coming) that describes where to
find the home Keystone server for a specified domain...it would
work nicely with the K2K strategy.

If we go with DNS project naming, we can leave all project names
in a flat string.


Note that the DNS approach can work even if the user does not wish
to register their own DNS.  A hosting provider (I'll pick Dreamhost,
because I know they are listening) could say that each of
their tenants picks a user name... say that mine is admiyo; they
would then create a subdomain of admiyo.dreamcompute.dreamhost.com.
All of my subprojects
would then get additional zones under that.  If I were then to
burst from there to Bluebox, the Keystone domain name would be the
one that I was assigned back at Dreamhost.


Re: [openstack-dev] [Glance][Keystone] Glance and trusts

2015-06-05 Thread Adam Young

On 06/05/2015 10:39 AM, Dolph Mathews wrote:


On Thu, Jun 4, 2015 at 1:54 AM, David Chadwick wrote:


I did suggest another solution to Adam whilst we were in
Vancouver, and
this mirrors what happens in the real world today when I order
something
from a supplier and a whole supply chain is involved in creating
the end
product that I ordered. This is not too dissimilar to a user
requesting
a new VM. Essentially each element in the supply chain trusts the two
adjacent elements. It has contracts with both its customers and its
suppliers to define the obligations of each party. When something is
ordered from it, it trusts the purchaser, and on the strength of this,
it will order from its suppliers. Each element may or may not know who
the original customer is, but if it needs to know, it trusts the
purchaser to tell it. Furthermore the customer does not need to
delegate
any of his/her permissions to his/her supplier. If we used such a
system
of trust between Openstack services, then we would not need delegation
of authority and "trusts" as they are implemented today. It could
significantly simplify the interactions between OpenStack services.


+1! I feel like this is the model that we started with in OpenStack, 
and have grown additional complexity over time without much benefit.


We could roll Glance into Nova, too, and get the same benefit. There is
a reason we have separate services.  Glance should not trust Nova for all
operations, just some.


David's example elides the fact that there are checks built into the
supply chain system to prevent cheating.







regards
David

On 03/06/2015 21:03, Adam Young wrote:
> On 06/02/2015 12:57 PM, Mikhail Fedosin wrote:
>> Hello! I think it's a good time to discuss implementation of
trusts in
>> Glance v2 and v3 api.
>>
>> Currently we have two different situations during image
creation where
>> our token may expire, which leads to unsuccessful operation.
>>
>> First is connection between glance-api and glance-registry. In this
>> case we have a solution (https://review.openstack.org/#/c/29967/) -
>> use_user_token parameter in glance-api.conf, but it is True by
default
>> . If it's changed to False then glance-api will use its own
>> credentials to authorize in glance-registry and it prevents many
>> possible issues with user token expiration. So, I'm interested if
>> there are some performance degradations if we change
use_user_token to
>> False and what are the reasons against making it the default value.
>>
>> Second one is linked with Swift. Current implementation uploads
chunks
>> one by one and requires authorization each time. It may lead to
>> problems: for example we have to upload 100 chunks, after 99th one,
>> token expired and glance can't upload the last one, catches an
>> exception and tries to remove stale chunks from storage. Of
course it
>> will fail, because token is not valid anymore, and that's why there
>> will be 99 garbage objects in the storage.
>> With Single-tenant mode glance uses its own credentials to upload
>> files, so it's possible to create new connection on each chunk
upload
>> or catch Unauthorized exception and recreate connections only
in that
>> cases. But with Multi-tenant mode there is no way to do it, because
>> user credentials are required. So it seems that trusts is the
only one
>> solution here.
> The problem with using trusts is that it would need to be created
> per-user, and that is going to be expensive.  It would be
possible, as
> Heat does something of this nature:
>
> 1. User calls glance,
> 2. Glance creates a trust with some limitation, either time or
number of
> uses
> 3.  Trusts are used for all operations with swift.
> 4. Glance should clean up the trust when it is complete.
>
> I don't love the solution, but I think it is the best we have.  Ideally
> the user would opt in to the trust, but in this case, it is kind of
> implicit by them calling the API.
>
>
> We should limit the trust creation to only have those roles (or a
> subset) on the token used to create the trust.
>
>
>
>
>> I would be happy to hear your opinions on that matter. If you know
>> other situations where trusts are useful or some other approaches
>> please share.
>>
>> Best regards,
>> Mike Fedosin
>>
>>
>>
>>

Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-05 Thread Bradley Jones (bradjone)
The initial draft of the spec is out for review here:
https://review.openstack.org/188958 - feedback welcome :)

Thanks,
Brad Jones

On 4 Jun 2015, at 22:30, Adrian Otto wrote:

Team,

I have published a top level blueprint for a magnum-horizon-plugin:

https://blueprints.launchpad.net/magnum/+spec/magnum-horizon-plugin

My suggestion is that any contributor interested in contributing to this 
feature should subscribe to that blueprint, and record their intent to 
contribute in the Whiteboard of the BP. Furthermore, I suggest that any 
contributors who are a good fit for core reviewer duties for this effort 
subscribe to the blueprint and mark themselves as “Participation Essential” so 
I can get a clear picture of how to deal with grouping the related core 
reviewer team (or adding them to the current core group).

I think that this effort would benefit from a spec submitted as a review using 
the following template:

http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/liberty-template.rst

Adapt it for magnum (I have not contributed a spec template of our own yet. 
TODO.)

Contribute it here:

http://git.openstack.org/cgit/openstack/magnum/tree/specs

Thanks,

Adrian

On Jun 4, 2015, at 12:58 PM, Steven Dake (stdake) wrote:

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard
to represent Magnum.  I know the entire Magnum team has no experience in UI
development, but I have found at least one volunteer, Bradley Jones, to tackle
the work.

I am looking for more volunteers to tackle this high-impact effort to bring
containers to OpenStack, either in the existing Magnum core team or as new
contributors.   If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Regarding Flow classifiers existing roposals

2015-06-05 Thread Miguel Angel Ajo
Hi,  

   Sounds good, but I could only join if it's at the very start of the meeting;
after 17:15 UTC I'm unavailable on Thursdays.

   Let me know if that’d be possible.


Thanks in advance,
Miguel Ángel Ajo


On Friday 5 June 2015 at 20:36, Cathy Zhang wrote:

> Sure. I will add this item to the next IRC meeting agenda.  
>   
> Thanks,
> Cathy
>   
> From: Henry Fourie  
> Sent: Friday, June 05, 2015 11:27 AM
> To: Miguel Angel Ajo; Vikram Choudhary
> Cc: azama-y...@mxe.nes.nec.co.jp; Cathy Zhang; arma...@gmail.com; Dongfeng (C);
> Kyle Mestery; openstack-dev@lists.openstack.org; Dhruv Dhody; Kalyankumar Asangi
> Subject: RE: [neutron] Regarding Flow classifiers existing proposals  
>   
> Miguel,
>I agree, we can probably use the service-chaining meeting to discuss this.
> We can have it as an agenda item for the next meeting:
> http://eavesdrop.openstack.org/#Neutron_Service_Chaining_meeting
>   
> -  Louis
>   
>   
> From: Miguel Angel Ajo [mailto:mangel...@redhat.com]  
> Sent: Friday, June 05, 2015 1:42 AM
> To: Vikram Choudhary
> Cc: azama-y...@mxe.nes.nec.co.jp; Henry Fourie; Cathy Zhang; arma...@gmail.com;
> Dongfeng (C); Kyle Mestery; openstack-dev@lists.openstack.org; Dhruv Dhody;
> Kalyankumar Asangi
> Subject: [neutron] Regarding Flow classifiers existing proposals  
>   
>   
>  
>   
>  
> Added openstack-dev, where I believe this conversation must live.
>  
>   
>  
> I totally agree on this, thank you for bringing up this conversation. This is 
> not something we want to do for QoS this cycle, but probably next cycle.
>  
>   
>  
> Anyway, a unified data model and API to create/update classifiers will not 
> only be beneficial from the code duplication point of view, but will also 
> provide a better user experience.
>  
>   
>  
> I’m all for it.
>  
>   
>  
> Best regards,
>  
> Miguel Ángel Ajo
>  
>  
>   
>  
> On Friday 5 June 2015 at 09:57, Vikram Choudhary wrote:
> >  
> > Dear All,
> >  
> >  
> >   
> >  
> >  
> > There are multiple proposals floating around flow classifier rules for
> > Liberty [1], [2] and [3].
> >  
> >  
> > I feel we all should work together and try to address all our use cases
> > with a unified framework, rather than working separately to achieve the
> > same goal.
> >  
> >  
> >   
> >  
> >  
> > Moreover, I find the flow classifier as defined by the existing SFC [2]
> > proposal is generic enough that it could address all the use cases with
> > minor extensions.
> >  
> >  
> >   
> >  
> >  
> > In this regard, I would like everyone to come forward, exchange their thoughts,
> > work together and get it right on the first go, rather than making the same
> > effort separately and ending up duplicating code & effort :(.
> >  
> >  
> > I always feel less code will make our life happy in the long run ;)
> >  
> >  
> >   
> >  
> >  
> > Please let me know about your views.
> >  
> >  
> >   
> >  
> >  
> > [1] Add Neutron API extensions for packet forwarding
> >  
> >  
> >   https://review.openstack.org/#/c/186663/
> >  
> >  
> >   
> >  
> > [2] Neutron API for Service Chaining [Flow Filter resource]
> >  
> >   
> > https://review.openstack.org/#/c/177946/6/specs/liberty/neutron-api-for-service-chaining.rst
> >  
> >  
> >   
> >  
> >  
> > [3] QoS API Extension [Defines classifier rule in QoSRule. Classifier rule 
> > can really grow big in the long run]:
> >  
> >  
> >   
> > https://review.openstack.org/#/c/88599/10/specs/liberty/qos-api-extension.rst
> >  
> >  
> >   
> >  
> >  
> > Thanks
> >  
> >  
> > Vikram
> >  
> >  
> >  
> >  
> >  
>  
>   
>  
>  
>  
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  
>  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why do we drop branches? (WAS: Re: Targeting icehouse-eol?)

2015-06-05 Thread Alan Pevec
> Why do we even drop stable branches? If anything, it introduces
> unneeded problems to those who have their scripts/cookbooks set to
> chase those branches. They would need to switch to the eol tag.

Because they would otherwise expect updates which will never come.
They should take notice, and removing a branch is a clear enough
message that they're now on their own.

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-05 Thread Joshua Harlow
Hopefully it's somewhat obvious to folks that altering the PBR version
schema (yet again) breaks *all the people* using it in a reliable
manner, and if the goal of PBR is to bring reasonableness, then changing
it in a way that breaks things seems like the anti-pattern of PBR.


I know from building with anvil and building RPMs that we only used tags
in https://github.com/stackforge/anvil/tree/master/conf/origins because
the PBR version number would change format/style/otherwise (and also
partially because nothing existed that converted it to a *sane* RPM version
string, although this supposedly exists now via a new PBR function).


Thierry Carrez wrote:

Jeremy Stanley wrote:

On 2015-06-01 15:57:17 + (+), Jeremy Stanley wrote:
[...]

The biggest hurdle is that we'd need a separate upload job name
for those since the current version of Zuul lacks a way to run a
particular job for different branches in different pipelines (we'd
want to do versioned uploads for all pre-release and release
pipeline refs, but also for post pipeline refs only when the
branch name is like stable/.*).

Actually, scratch that. It's a bit more complicated since the post
pipeline isn't actually branch-relevant. We'd need to tweak the
tarball and wheel creation scripts to check the containing branch,
like we do for some proposal jobs. Still, I think it wouldn't be too
hard.


Exploring plan D, I was looking at the versions we currently generate on
stable branches and I think they would not convey the right message:

"2015.1.1.dev38"

- but there won't be a 2015.1.1 !
- but this is not "under development" !

I was wondering if we could switch to post-versioning on stable
branches, and basically generate:

"2015.1.0.post38"

... which would convey the right message.

I /think/ all it would take would be, as the first post-release commit
to the stable branch, to remove the preversion from setup.cfg (rather
than bump it to the next .1). I think pbr would switch to postversioning
in that case and generate postX versions from the last tag in the branch.

Not sure we would do that for stable/kilo, though, since we already
pushed 2015.1.1.devX versions in the wild.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] 6.1 Hard Code Freeze status update June 5th

2015-06-05 Thread Eugene Bogdanov

Hello everyone,

Unfortunately the recent patch for the Swift issue [1] broke our staging
tests last night. The problem is now resolved, and the new staging tests passed
successfully. We need ~12 more hours to run the newly built ISO through
SWARM tests, so we will have the results tomorrow morning MSK. If they
are good, we'll declare HCF status tonight/early Saturday morning (June
6th).


We are not sure why this patch passed through our internal control
systems; we will do a thorough root-cause analysis early next week.


[1] https://bugs.launchpad.net/fuel/+bug/1462142

--
EugeneB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] RTNETLINK permission denied on setting IPv6 on br-ex

2015-06-05 Thread Angela Smith
We have been having this issue with devstack installation since Tuesday 6/2.
On trying to add an IPv6 address to br-ex, it fails with "permission denied".
Here's the output from stack.sh:
+ IPV6_ROUTER_GW_IP=2001:db8::1
+ die_if_not_set 1299 IPV6_ROUTER_GW_IP 'Failure retrieving IPV6_ROUTER_GW_IP'
+ local exitcode=0
++ set +o
++ grep xtrace
+ local 'xtrace=set -o xtrace'
+ set +o xtrace
+ is_neutron_ovs_base_plugin
+ return 0
+ [[ True = \T\r\u\e ]]
++ _neutron_get_ext_gw_interface
++ [[ False == \T\r\u\e ]]
++ sudo ovs-vsctl set Bridge br-ex other_config:disable-in-band=true
++ echo br-ex
+ local ext_gw_interface=br-ex
+ local ipv6_cidr_len=64
+ sudo ip -6 addr add 2001:db8::2/64 dev br-ex
RTNETLINK answers: Permission denied
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : Retrieval of the secret in text/plain format generated from Barbican order resource

2015-06-05 Thread Asha Seshagiri
Hi All ,

I am currently working on use cases for database and file encryption. It is
really important for us to know, since my encryption use case would be using
the key generated by Barbican through the order resource as the key.
The encryption algorithms will not accept the binary format, and even when it is
converted to ASCII, encoding fails for a few of the keys because some
characters exceed the range of ASCII, and for some keys the length after
encoding exceeds 32 bytes, which is the maximum key length for doing AES
encryption.
It would be great if someone could respond to the query, since it blocks my
further investigation of encryption use cases using Barbican.
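For reference, a minimal sketch of one way to keep the raw key intact for AES; the
key bytes here are generated locally purely as a stand-in for a Barbican secret
payload:

    import base64
    import os

    raw_key = os.urandom(32)   # stand-in for a 256-bit key from a Barbican order

    # base64 keeps the key printable for transport or config files...
    key_b64 = base64.b64encode(raw_key)

    # ...and decoding restores the exact 32 raw bytes AES-256 expects,
    # avoiding ASCII conversions that change the length or lose characters.
    key_for_aes = base64.b64decode(key_b64)
    assert key_for_aes == raw_key and len(key_for_aes) == 32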

Thanks and Regards,
Asha Seshagiri


On Wed, Jun 3, 2015 at 3:51 PM, Asha Seshagiri 
wrote:

> Hi All,
>
> Unable to retrieve the secret in text/plain format  generated from
> Barbican order resource
>
> Please find the curl commands and responses for the following:
>
> *Order creation with payload content type as text/plain* :
>
> [root@barbican-automation ~]# curl -X POST -H
> 'content-type:application/json' -H
> "X-Auth-Token:9b211b06669249bb89665df068828ee8" \
> > -d '{"type" : "key", "meta": {"name": "secretname2","algorithm": "aes",
> "bit_length":256,  "mode": "cbc", "payload_content_type": *"text/plain"*}}'
> -k https://169.53.235.102:9311/v1/orders
>
> *{"order_ref":
> "https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680
> *
> "}
>
> *Retrieval of the order by ORDER ID in order to get to know the secret
> generated by Barbican*
>
> [root@barbican-automation ~]# curl -H 'Accept: application/json' -H
> "X-Auth-Token:9b211b06669249bb89665df068828ee8" \
> > -k https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680
> {"status": "ACTIVE", "sub_status": "Unknown", "updated":
> "2015-06-03T19:08:13", "created": "2015-06-03T19:08:12", "order_ref": "
> https://169.53.235.102:9311/v1/orders/727113f9-fcda-4366-9f85-93b15edd4680";,
> "secret_ref": 
> "*https://169.53.235.102:9311/v1/secrets/5c25525d-a162-4b0b-9954-90c4ce426c4e
> *",
> "creator_id": "cedd848a8a9e410196793c601c03b99a", "meta": {"name":
> "secretname2", "algorithm": "aes", "payload_content_type": "text/plain",
> "mode": "cbc", "bit_length": 256, "expiration": null},
> "sub_status_message": "Unknown", "type": "key"}[root@barbican-automation
> ~]#
>
>
> *Retrieval of the secret failing with the content type text/plain*
>
> [root@barbican-automation ~]# curl -H 'Accept:text/plain' -H
> "X-Auth-Token:9b211b06669249bb89665df068828ee8" -k 
> *https://169.53.235.102:9311/v1/secrets/5c25525d-a162-4b0b-9954-90c4ce426c4e/payload
> *
> *{"code": 500, "description": "Secret payload retrieval failure seen -
> please contact site administrator.", "title": "Internal Server Error"}*
>
> I would like to know whether this is a bug on the Barbican side, since
> Barbican allows creation of the order resource with text/plain as the
> payload_content_type, but retrieval of the secret payload with the
> content type text/plain is not allowed.
>
> Any help would highly be appreciated.
> --
> *Thanks and Regards,*
> *Asha Seshagiri*
>



-- 
*Thanks and Regards,*
*Asha Seshagiri*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-06-05 Thread Daniel Comnea
+1

On Fri, Jun 5, 2015 at 6:38 PM, Vilobh Meshram <
vilobhmeshram.openst...@gmail.com> wrote:

> +1
>
> On Thu, May 14, 2015 at 3:52 AM, John Garbutt 
> wrote:
>
>> On 12 May 2015 at 20:33, Sean Dague  wrote:
>> > On 05/12/2015 01:12 PM, Jeremy Stanley wrote:
>> >> On 2015-05-12 10:04:11 -0700 (-0700), Clint Byrum wrote:
>> >>> It's a nice up side. However, as others have pointed out, it's only
>> >>> capable of displaying the most basic pieces of the architecture.
>> >>>
>> >>> For higher level views with more components, I don't think ASCII art
>> >>> can provide enough bandwidth to help as much as a vector diagram.
>> >>
>> >> Of course, simply a reminder that just because you have one or two
>> >> complex diagram callouts in a document doesn't mean it's necessary
>> >> to also go back and replace your simpler ASCII art diagrams with
>> >> unintelligible (without rendering) SVG or Postscript or whatever.
>> >> Doing so pointlessly alienates at least some fraction of readers.
>> >
>> > Sure, it's all about trade offs.
>> >
>> > But I believe that statement implicitly assumes that ascii art diagrams
>> > do not alienate some fraction of readers. And I think that's a bad
>> > assumption.
>> >
>> > If we all feel alienated every time anyone does anything that's not
>> > exactly the way we would have done it, it's time to give up and pack it
>> > in. :) This thread specifically mentioned source based image formats
>> > that were internationally adopted open standards (w3c SVG, ISO ODG) that
>> > have free software editors that exist in Windows, Mac, and Linux
>> > (Inkscape and Open/LibreOffice).
>>
>> Some great points made here.
>>
>> Let's try to decide something, and move forward here.
>>
>> Key requirements seem to be:
>> * we need something that gives us readable diagrams
>> * if it's not easy to edit, it will go stale
>> * ideally needs to be source based, so it lives happily inside git
>> * needs to integrate into our sphinx pipeline
>> * ideally have an open-source editor for that format (import and
>> export), for most platforms
>>
>> ascii art fails on many of these, but it's always a trade-off.
>>
>> Possible way forward:
>> * lets avoid merging large hard to edit bitmap style images
>> * nova-core reviewers can apply their judgement on merging source based
>> formats
>> * however it *must* render correctly in the generated html (see result
>> of docs CI job)
>>
>> Trying out SVG, and possibly blockdiag, seem like the front runners.
>> I don't think we will get consensus without trying them, so let's do that.
>>
>> Will that approach work?
>>
>> Thanks,
>> John
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [javascript] Linters

2015-06-05 Thread Michael Krotscheck
Right now, there are several JS linters in use in OpenStack: JSHint, JSCS,
and Eslint. I really would like to only use one of them, so that I can
figure out how to sanely share the configuration between projects.

Can all those who have a strong opinion please stand up and state their
opinions?

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Regarding Flow classifiers existing proposals

2015-06-05 Thread Cathy Zhang
Sure. I will add this item to the next IRC meeting agenda.

Thanks,
Cathy

From: Henry Fourie
Sent: Friday, June 05, 2015 11:27 AM
To: Miguel Angel Ajo; Vikram Choudhary
Cc: azama-y...@mxe.nes.nec.co.jp; Cathy Zhang; arma...@gmail.com; Dongfeng (C); 
Kyle Mestery; openstack-dev@lists.openstack.org; Dhruv Dhody; Kalyankumar Asangi
Subject: RE: [neutron] Regarding Flow classifiers existing proposals

Miguel,
   I agree, we can probably use the service-chaining meeting to discuss this.
We can have it as an agenda item for the next meeting:
http://eavesdrop.openstack.org/#Neutron_Service_Chaining_meeting


-  Louis



From: Miguel Angel Ajo [mailto:mangel...@redhat.com]
Sent: Friday, June 05, 2015 1:42 AM
To: Vikram Choudhary
Cc: azama-y...@mxe.nes.nec.co.jp; Henry Fourie; Cathy Zhang; arma...@gmail.com; 
Dongfeng (C); Kyle Mestery; openstack-dev@lists.openstack.org; Dhruv Dhody; 
Kalyankumar Asangi
Subject: [neutron] Regarding Flow classifiers existing proposals



Added openstack-dev, where I believe this conversation must live.

I totally agree on this, thank you for bringing up this conversation. This is 
not something we want to do for QoS this cycle, but probably next cycle.

Anyway, a unified data model and API to create/update classifiers will not 
only be beneficial from the code duplication point of view, but will also 
provide a better user experience.

I’m all for it.

Best regards,
Miguel Ángel Ajo


On Friday 5 June 2015 at 09:57, Vikram Choudhary wrote:

Dear All,



There are multiple proposals floating around flow classifier rules for Liberty 
[1], [2] and [3].

I feel we all should work together and try to address all our use cases with a 
unified framework rather than working separately toward the same goal.

Moreover, I find that the flow classifier as defined by the existing SFC 
proposal [2] is generic enough that it could address all the use cases with 
minor extensions.

In this regard, I would like everyone to come forward, exchange their thoughts, work 
together and get this right on the first go rather than doing the same work 
separately and ending up duplicating code & effort ☹.

I always feel less code will make our life happy in the long run ;)



Please let me know about your views.



[1] Add Neutron API extensions for packet forwarding

  https://review.openstack.org/#/c/186663/



[2] Neutron API for Service Chaining [Flow Filter resource]

  
https://review.openstack.org/#/c/177946/6/specs/liberty/neutron-api-for-service-chaining.rst



[3] QoS API Extension [Defines classifier rule in QoSRule. Classifier rule can 
really grow big in the long run]:

  
https://review.openstack.org/#/c/88599/10/specs/liberty/qos-api-extension.rst



Thanks

Vikram

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Regarding Flow classifiers existing proposals

2015-06-05 Thread Cathy Zhang
Hi Vikram,

Definitely. We should have one unified and generic flow classifier/filter 
(whatever name we call it) that can be used in all cases. Thank you for driving 
this!

Thanks,
Cathy

From: Miguel Angel Ajo [mailto:mangel...@redhat.com]
Sent: Friday, June 05, 2015 1:42 AM
To: Vikram Choudhary
Cc: azama-y...@mxe.nes.nec.co.jp; Henry Fourie; Cathy Zhang; arma...@gmail.com; 
Dongfeng (C); Kyle Mestery; openstack-dev@lists.openstack.org; Dhruv Dhody; 
Kalyankumar Asangi
Subject: [neutron] Regarding Flow classifiers existing proposals



Added openstack-dev, where I believe this conversation must live.

I totally agree on this, thank you for bringing up this conversation. This is 
not something we want to do for QoS this cycle, but probably next cycle.

Anyway, a unified data model and API to create/update classifiers will not 
only be beneficial from the code duplication point of view, but will also 
provide a better user experience.

I’m all for it.

Best regards,
Miguel Ángel Ajo


On Friday 5 June 2015 at 09:57, Vikram Choudhary wrote:

Dear All,



There are multiple proposals floating around flow classifier rules for Liberty 
[1], [2] and [3].

I feel we all should work together and try to address all our use cases with a 
unified framework rather than working separately toward the same goal.

Moreover, I find that the flow classifier as defined by the existing SFC 
proposal [2] is generic enough that it could address all the use cases with 
minor extensions.

In this regard, I would like everyone to come forward, exchange their thoughts, work 
together and get this right on the first go rather than doing the same work 
separately and ending up duplicating code & effort ☹.

I always feel less code will make our life happy in the long run ;)



Please let me know about your views.



[1] Add Neutron API extensions for packet forwarding

  https://review.openstack.org/#/c/186663/



[2] Neutron API for Service Chaining [Flow Filter resource]

  
https://review.openstack.org/#/c/177946/6/specs/liberty/neutron-api-for-service-chaining.rst



[3] QoS API Extension [Defines classifier rule in QoSRule. Classifier rule can 
really grow big in the long run]:

  
https://review.openstack.org/#/c/88599/10/specs/liberty/qos-api-extension.rst



Thanks

Vikram

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Multiple KMIP servers on a single barbican

2015-06-05 Thread Nathan Reller
> You would just store the url in the DTO.

You will need to have the KMIP secret store return the KMIP server
that handled the request in the metadata that is returned to Barbican
Core.

> each kmip server url would need to be in the barbican-api.conf file?

I would assume that would be true.

> I'm trying to stray away from making multiple active plugins

That is good because only one secret store is allowed to be active in
Barbican. You can add this functionality to the KMIP secret store
plugin. You would need to change it to hold a list of valid KMIP
servers. When a request is received to store or generate a key, you
would need some algorithm to decide which KMIP appliance to choose.
Then do everything as normal. At the end, return the KMIP URL in the
metadata. All other operations would then retrieve the server URL
before communicating with the KMIP appliance. I hope that makes sense.
If not, I will be around on IRC.
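To make that concrete, here is a rough sketch of the shape I have in mind
(class and method names are illustrative, not the real plugin interface):

    # Sketch only: the names below are illustrative, not the actual barbican
    # KMIP secret store interface.
    import random

    class MultiServerKMIPSecretStore(object):
        def __init__(self, kmip_servers):
            # e.g. a list of host:port endpoints read from barbican-api.conf
            self.kmip_servers = list(kmip_servers)

        def _choose_server(self):
            # Any selection policy would do: round-robin, least-loaded, random...
            return random.choice(self.kmip_servers)

        def store_secret(self, secret_dto):
            server = self._choose_server()
            uid = self._store_on(server, secret_dto)
            # Hand the chosen appliance back to Barbican core in the metadata,
            # so later operations know which server holds the secret.
            return {'unique_identifier': uid, 'kmip_server': server}

        def get_secret(self, secret_metadata):
            # Every other operation reads the server back out of the metadata
            # before talking to KMIP.
            server = secret_metadata['kmip_server']
            return self._retrieve_from(server,
                                       secret_metadata['unique_identifier'])

        # Placeholders for the actual PyKMIP proxy calls.
        def _store_on(self, server, secret_dto):
            raise NotImplementedError

        def _retrieve_from(self, server, uid):
            raise NotImplementedError

The only real decision is the selection policy in _choose_server(); the rest
is bookkeeping around the metadata that is already handed back and forth.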

-Nate

On Fri, Jun 5, 2015 at 1:41 PM, Christopher N Solis  wrote:
> Hey all.
>
> I wanted to get people's opinion on allowing barbican to talk to multiple
> KMIP servers.
> I got good advice from Nathan and John and it seems like it would be pretty
> easy keeping track of
> which secret resides in which KMIP applicance. You would just store the url
> in the DTO.
> However, in order for barbican to be aware of all KMIP servers wouldn't that
> mean that each
> kmip server url would need to be in the barbican-api.conf file? Or somewhere
> for barbican
> to know that multiple kmip servers are available? I noticed that there is a
> blueprint to introduce
> the concept of a single active and multiple inactive secret store plugins so
> I'm trying to stray away from
> making multiple active plugins.
>
> Regards,
>
>   Chris Solis
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-05 Thread Boris Pavlovic
Hi,

Maybe we should just give it a try:

1) I will prepare all the modifications outside of infra and show a demo
2) Get it into Infra as an experimental feature
3) Try it in Rally
4) Share the experience and, based on whether it is worth it, keep it or get rid of it.

Best regards,
Boris Pavlovic

On Fri, Jun 5, 2015 at 9:21 PM, Valeriy Ponomaryov  wrote:

> If such a possibility appears then there will definitely be people who will try
> it without weighing in on this discussion (like me).
>
> And the only social problem is that such "maintainers" of
> project sub-parts should be responsible enough. It is very likely that some
> maintainers of vendor-specific things cannot be trusted to have "approval
> rights" in general, for some objective reasons (low-quality code,
> etc...). Hence, we should not automate the granting of rights, but we should
> automate the review process.
>
> So, I would like to have such a possibility/feature in the projects I
> participate in as soon as there is a big community for these projects.
>
> It's worth a try, IMHO.
>
> --
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com
> vponomar...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][scaling up development] Rally core team re-organization

2015-06-05 Thread Doug Hellmann
Excerpts from Boris Pavlovic's message of 2015-06-05 21:16:04 +0300:
> All,
> 
> Sorry for picking very bad words. I am not a native speaker. =(
> 
> In Russian this word also has another meaning, like being "too aggressive
> against somebody".  I used it in that meaning. I didn't think about
> sexual violation at all.
> Sorry, sorry and one more time sorry about using improper words.

Thank you, Boris. It's important for all of us to remember that
with cultural and language differences we need to take care with
phrasing to ensure we are communicating in a clear and friendly
way. I appreciate your explanation and apology.

Doug

> 
> 
> Best regards,
> Boris Pavlovic
> 
> On Fri, Jun 5, 2015 at 9:05 PM, Nikola Đipanov  wrote:
> 
> > On 06/05/2015 06:31 PM, Doug Hellmann wrote:
> > > Excerpts from Boris Pavlovic's message of 2015-06-05 20:03:44 +0300:
> > >> Hi stackers,
> > >>
> > >> Seems likes after stackforge/rally -> openstack/rally Rally project
> > started
> > >> being more attractive.
> > >> According recent stats we are on top 3 position (based on Patch sets
> > stats)
> > >>
> > http://stackalytics.com/?release=liberty&metric=patches&project_type=All
> > >> And if we compare half year ago we have 40 open reviews and now we have
> > >> about 140...
> > >> In other words we need to scale core reviewing process with keeping
> > >> quality.
> > >>
> > >> 
> > >>
> > >> I suggested in mailing thread:
> > >> [openstack-dev][all][infra][tc][ptl] Scaling up code review process
> > (subdir
> > >> cores)
> > >> To create special rules & ACL groups to have fully automated system.
> > >>
> > >> Instead of support I got raped by community.
> > >
> > > I understand that you feel that the negative response to your
> > > proposal was strong, but this is *COMPLETELY* inappropriate wording
> > > for this mailing list.
> > >
> >
> > +1000 - words have meaning and getting one's ideas criticized on a
> > mailing list by peers is not even in the same universe as being sexually
> > violated!!!
> >
> > IMHO this kind of behaviour needs to be sanctioned now, this kind of
> > language must not take root on this list ever!
> >
> > I have no idea what the process for this is but I am sure people who
> > know will respond soon.
> >
> > Not cool Boris! Not even a little bit.
> >
> > N.
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Regarding Flow classifiers existing proposals

2015-06-05 Thread Henry Fourie
Miguel,
   I agree, we can probably use the service-chaining meeting to discuss this.
We can have it as an agenda item for the next meeting:
http://eavesdrop.openstack.org/#Neutron_Service_Chaining_meeting


-  Louis



From: Miguel Angel Ajo [mailto:mangel...@redhat.com]
Sent: Friday, June 05, 2015 1:42 AM
To: Vikram Choudhary
Cc: azama-y...@mxe.nes.nec.co.jp; Henry Fourie; Cathy Zhang; arma...@gmail.com; 
Dongfeng (C); Kyle Mestery; openstack-dev@lists.openstack.org; Dhruv Dhody; 
Kalyankumar Asangi
Subject: [neutron] Regarding Flow classifiers existing proposals



Added openstack-dev, where I believe this conversation must live.

I totally agree on this, thank you for bringing up this conversation. This is 
not something we want to do for QoS this cycle, but probably next cycle.

Anyway, a unified data model and API to create/update classifiers will not 
only be beneficial from the code duplication point of view, but will also 
provide a better user experience.

I’m all for it.

Best regards,
Miguel Ángel Ajo


On Friday 5 June 2015 at 09:57, Vikram Choudhary wrote:

Dear All,



There are multiple proposals floating around flow classifier rules for Liberty 
[1], [2] and [3].

I feel we all should work together and try to address all our use cases with a 
unified framework rather than working separately toward the same goal.

Moreover, I find that the flow classifier as defined by the existing SFC 
proposal [2] is generic enough that it could address all the use cases with 
minor extensions.

In this regard, I would like everyone to come forward, exchange their thoughts, work 
together and get this right on the first go rather than doing the same work 
separately and ending up duplicating code & effort ☹.

I always feel less code will make our life happy in the long run ;)



Please let me know about your views.



[1] Add Neutron API extensions for packet forwarding

  https://review.openstack.org/#/c/186663/



[2] Neutron API for Service Chaining [Flow Filter resource]

  
https://review.openstack.org/#/c/177946/6/specs/liberty/neutron-api-for-service-chaining.rst



[3] QoS API Extension [Defines classifier rule in QoSRule. Classifier rule can 
really grow big in the long run]:

  
https://review.openstack.org/#/c/88599/10/specs/liberty/qos-api-extension.rst



Thanks

Vikram

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-05 Thread Valeriy Ponomaryov
If such a possibility appears then there will definitely be people who will try it
without weighing in on this discussion (like me).

And the only social problem is that such "maintainers" of
project sub-parts should be responsible enough. It is very likely that some
maintainers of vendor-specific things cannot be trusted to have "approval
rights" in general, for some objective reasons (low-quality code,
etc...). Hence, we should not automate the granting of rights, but we should
automate the review process.

So, I would like to have such a possibility/feature in the projects I participate in
as soon as there is a big community for these projects.

It's worth a try, IMHO.

-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][scaling up development] Rally core team re-organization

2015-06-05 Thread Boris Pavlovic
All,

Sorry for picking very bad words. I am not a native speaker. =(

In Russian this word also has another meaning, like being "too aggressive
against somebody".  I used it in that meaning. I didn't think about
sexual violation at all.
Sorry, sorry and one more time sorry about using improper words.


Best regards,
Boris Pavlovic

On Fri, Jun 5, 2015 at 9:05 PM, Nikola Đipanov  wrote:

> On 06/05/2015 06:31 PM, Doug Hellmann wrote:
> > Excerpts from Boris Pavlovic's message of 2015-06-05 20:03:44 +0300:
> >> Hi stackers,
> >>
> >> Seems likes after stackforge/rally -> openstack/rally Rally project
> started
> >> being more attractive.
> >> According recent stats we are on top 3 position (based on Patch sets
> stats)
> >>
> http://stackalytics.com/?release=liberty&metric=patches&project_type=All
> >> And if we compare half year ago we have 40 open reviews and now we have
> >> about 140...
> >> In other words we need to scale core reviewing process with keeping
> >> quality.
> >>
> >> 
> >>
> >> I suggested in mailing thread:
> >> [openstack-dev][all][infra][tc][ptl] Scaling up code review process
> (subdir
> >> cores)
> >> To create special rules & ACL groups to have fully automated system.
> >>
> >> Instead of support I got raped by community.
> >
> > I understand that you feel that the negative response to your
> > proposal was strong, but this is *COMPLETELY* inappropriate wording
> > for this mailing list.
> >
>
> +1000 - words have meaning and getting one's ideas criticized on a
> mailing list by peers is not even in the same universe as being sexually
> violated!!!
>
> IMHO this kind of behaviour needs to be sanctioned now, this kind of
> language must not take root on this list ever!
>
> I have no idea what the process for this is but I am sure people who
> know will respond soon.
>
> Not cool Boris! Not even a little bit.
>
> N.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][scaling up development] Rally core team re-organization

2015-06-05 Thread Nikola Đipanov
On 06/05/2015 06:31 PM, Doug Hellmann wrote:
> Excerpts from Boris Pavlovic's message of 2015-06-05 20:03:44 +0300:
>> Hi stackers,
>>
>> Seems likes after stackforge/rally -> openstack/rally Rally project started
>> being more attractive.
>> According recent stats we are on top 3 position (based on Patch sets stats)
>> http://stackalytics.com/?release=liberty&metric=patches&project_type=All
>> And if we compare half year ago we have 40 open reviews and now we have
>> about 140...
>> In other words we need to scale core reviewing process with keeping
>> quality.
>>
>> 
>>
>> I suggested in mailing thread:
>> [openstack-dev][all][infra][tc][ptl] Scaling up code review process (subdir
>> cores)
>> To create special rules & ACL groups to have fully automated system.
>>
>> Instead of support I got raped by community.
> 
> I understand that you feel that the negative response to your
> proposal was strong, but this is *COMPLETELY* inappropriate wording
> for this mailing list.
> 

+1000 - words have meaning and getting one's ideas criticized on a
mailing list by peers is not even in the same universe as being sexually
violated!!!

IMHO this kind of behaviour needs to be sanctioned now, this kind of
language must not take root on this list ever!

I have no idea what the process for this is but I am sure people who
know will respond soon.

Not cool Boris! Not even a little bit.

N.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][scaling up development] Rally core team re-organization

2015-06-05 Thread Boris Pavlovic
Sylvain,

Are you sure your tone is appropriate once you read again your email ?


I don't see anything wrong with the tone & email at all.
I just summarized the results of that thread for the Rally team, so they won't need
to read it,
and explained why we won't have sub-cores and need a trust model.
That's all.



> How can we help you understand that opinions help us to think about us and
> how we can be better ?


Some members of the community can avoid doing the things from the list that I wrote.


>
> Do you think you have to apologize for such this email ?


Not yet. Do I have any reason for that?


Best regards,
Boris Pavlovic

On Fri, Jun 5, 2015 at 8:43 PM, Sylvain Bauza  wrote:

>
>
> Le 05/06/2015 19:03, Boris Pavlovic a écrit :
>
> Hi stackers,
>
>  Seems likes after stackforge/rally -> openstack/rally Rally project
> started being more attractive.
> According recent stats we are on top 3 position (based on Patch sets
> stats)
>  http://stackalytics.com/?release=liberty&metric=patches&project_type=All
> And if we compare half year ago we have 40 open reviews and now we have
> about 140...
> In other words we need to scale core reviewing process with keeping
> quality.
>
>  
>
>  I suggested in mailing thread:
> [openstack-dev][all][infra][tc][ptl] Scaling up code review process
> (subdir cores)
> To create special rules & ACL groups to have fully automated system.
>
>  Instead of support I got raped by community.
> Community was very polite & technical oriented in that thread and they
> said:
> 1) I am bad PTL,
> 2) I don't know how to do open source
> 3) Rally project sux
> 4) Rally project community sux
> 5) Rally project has troubles
> 6) A lot of more constructive critics
>
>  So Instead of having NICE fully automated system for subcores we will
> use ugly, not automated but very popular in community "trust" model based
> on excel.
>
>  
>
>
>  Solution:
> We will have single core team that can merge anything.
> But there will be two types of core (based on trust ;()
>
>  I created page in docs, that explains who is who:
> https://review.openstack.org/#/c/188843/1
>
>  Core reviewer
> --
> That are core for whole project
>
>  Plugin Core reviewer
> 
> That will just review/merge their component plugins and nothing else
>
>
>  I hope by end of this cycle each component will have own subteam which
> will resolve
> most of reviewing process scale issues..
>
>
>  Best regards,
> Boris Pavlovic
>
>
> Are you sure your tone is appropriate once you read again your email ?
>
> How can we help you understand that opinions help us to think about us and
> how we can be better ?
>
> Do you think you have to apologize for such this email ?
>
> -Sylvain
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-05 Thread Henry Nash
The one proviso is that in single-LDAP situations, the cloud provider can choose 
(for backward compatibility reasons) to expose the underlying LDAP user/group 
IDs… so we might want to advise that this be disabled (there’s a config switch to 
use the Public ID mapping even for this case).

Henry
> On 5 Jun 2015, at 18:19, Dolph Mathews  wrote:
> 
> 
> On Fri, Jun 5, 2015 at 11:50 AM, Henry Nash  > wrote:
> So I think that Group IDs are actually unique and safe, since in the multi-
> LDAP case we provide an indirection already in Keystone and issue a "Public
> ID" (this is true for both users and groups) that we map to the underlying
> local ID in the particular LDAP backend.
> 
> Oh, awesome! I didn't realize we did that for groups as well. So then, we're 
> safe exposing X-Group-Ids to services via keystonemiddleware.auth_token but 
> still not X-Group-Names (in any trivial form).
>  
> 
> 
> Henry 
> 
> 
> From: Dolph Mathews mailto:dolph.math...@gmail.com>>
> To:   "OpenStack Development Mailing List (not for usage questions)" 
>  >, Henry Nash 
> mailto:hen...@linux.vnet.ibm.com>>, Henry 
> Nash/UK/IBM@IBMGB
> Date: 05/06/2015 15:38
> Subject:  Re: [openstack-dev] [keystone][barbican] Regarding exposing 
> X-Group- in token validation
> 
> 
> 
> 
> 
> On Thu, Jun 4, 2015 at 10:17 PM, John Wood  > wrote: 
> Hello folks, 
> 
> Regarding option C, if group IDs are unique within a given cloud/context, and 
> these are discoverable by clients that can then set the ACL on a secret in 
> Barbican, then that seems like a viable option to me. As it is now, the user 
> information provided to the ACL is the user ID information as found in 
> X-User-Ids now, not user names.  
> 
> To Kevin’s point though, are these group IDs unique across domains now, or in 
> the future? If not the more complex tuples suggested could be used, but seem 
> more error prone to configure on an ACL. 
> 
> Well, that's a good question, because that depends on the backend, and our 
> backend architecture has recently gotten very complicated in this area. 
> 
> If groups are backed by SQL, then they're going to be globally unique UUIDs, 
> so the answer is always yes. 
> 
> If they're backed by LDAP, then actually it depends on LDAP, but the answer 
> should be yes. 
> 
> But the nightmare scenario we now support is domain-specific identity 
> drivers, where each domain can actually be configured to talk to a different 
> LDAP server. In that case, I don't think you can make any guarantees about 
> group ID uniqueness :( Instead, each domain could provide whatever IDs it 
> wants, and those might conflict with those of other domains. We have a 
> workaround for a similar issue with user IDs, but it hasn't been applied to 
> groups, leaving them quite broken in this scenario. I'd consider this to be 
> an issue we need to solve in Keystone, though, not something other projects 
> need to worry about. I'm hoping Henry Nash can chime in and correct me! 
>   
> 
> Thanks, 
> John 
> 
> From: , Kevin M mailto:kevin@pnnl.gov>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Date: Thursday, June 4, 2015 at 6:01 PM
> To: "OpenStack Development Mailing List (not for usage questions)" 
>  > 
> 
> Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
> X-Group- in token validation 
> 
> In Juno I tried adding a user in Domain A to group in Domain B. That 
> currently is not supported. Would be very handy though.
> 
> We're getting a ways from the original part of the thread, so I may have lost 
> some context, but I think the original question was, if barbican can add
> group names to their resource acls.
> 
> Since two administrative domains can issue the same group name, its not safe 
> I believe.
> 
> Simply ensuring the group name is associated with a user and the domain for 
> the user matches the domain for the group wouldn't work because someone with 
> control of their own domain can just make a 
> user and give them the group with the name they want and come take your 
> credentials.
> 
> What may be safe is for the barbican ACL to contain the group_id if they are 
> uniqueue across all domains, or take a domain_id & group_name pair for the 
> acl.
> 
> Thanks,
> Kevin
> 
> 
> From: Dolph Mathews [dolph.math...@gmail.com ]
> Sent: Thursday, June 04, 2015 1:41 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
> X-Group- in token validation
> 
> Problem! In writing a spec for this ( 
> https://review.openstack.org/#/c/188564/ 
>  ), I remembered that groups are 
> domain-specific entities, which complicates the problem o

Re: [openstack-dev] [rally][scaling up development] Rally core team re-organization

2015-06-05 Thread Sylvain Bauza



Le 05/06/2015 19:03, Boris Pavlovic a écrit :

Hi stackers,

Seems likes after stackforge/rally -> openstack/rally Rally project 
started being more attractive.
According recent stats we are on top 3 position (based on Patch sets 
stats)

http://stackalytics.com/?release=liberty&metric=patches&project_type=All
And if we compare half year ago we have 40 open reviews and now we 
have about 140...
In other words we need to scale core reviewing process with keeping 
quality.




I suggested in mailing thread:
[openstack-dev][all][infra][tc][ptl] Scaling up code review process 
(subdir cores)

To create special rules & ACL groups to have fully automated system.

Instead of support I got raped by community.
Community was very polite & technical oriented in that thread and they 
said:

1) I am bad PTL,
2) I don't know how to do open source
3) Rally project sux
4) Rally project community sux
5) Rally project has troubles
6) A lot of more constructive critics

So Instead of having NICE fully automated system for subcores we will 
use ugly, not automated but very popular in community "trust" model 
based on excel.





Solution:
We will have single core team that can merge anything.
But there will be two types of core (based on trust ;()

I created page in docs, that explains who is who:
https://review.openstack.org/#/c/188843/1

Core reviewer
--
That are core for whole project

Plugin Core reviewer

That will just review/merge their component plugins and nothing else


I hope by end of this cycle each component will have own subteam which 
will resolve

most of reviewing process scale issues..


Best regards,
Boris Pavlovic


Are you sure your tone is appropriate once you read again your email ?

How can we help you understand that opinions help us to think about us 
and how we can be better ?


Do you think you have to apologize for such this email ?

-Sylvain




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][scaling up development] Rally core team re-organization

2015-06-05 Thread Boris Pavlovic
Doug,


I understand that you feel that the negative response to your
> proposal was strong, but this is *COMPLETELY* inappropriate wording
> for this mailing list.


Okay, next time I will copy-paste parts of emails from others
(with the even more offensive tone towards my side)
instead of making such a list.


Best regards,
Boris Pavlovic

On Fri, Jun 5, 2015 at 8:31 PM, Doug Hellmann  wrote:

> Excerpts from Boris Pavlovic's message of 2015-06-05 20:03:44 +0300:
> > Hi stackers,
> >
> > Seems likes after stackforge/rally -> openstack/rally Rally project
> started
> > being more attractive.
> > According recent stats we are on top 3 position (based on Patch sets
> stats)
> > http://stackalytics.com/?release=liberty&metric=patches&project_type=All
> > And if we compare half year ago we have 40 open reviews and now we have
> > about 140...
> > In other words we need to scale core reviewing process with keeping
> > quality.
> >
> > 
> >
> > I suggested in mailing thread:
> > [openstack-dev][all][infra][tc][ptl] Scaling up code review process
> (subdir
> > cores)
> > To create special rules & ACL groups to have fully automated system.
> >
> > Instead of support I got raped by community.
>
> I understand that you feel that the negative response to your
> proposal was strong, but this is *COMPLETELY* inappropriate wording
> for this mailing list.
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Barbican] Multiple KMIP servers on a single barbican

2015-06-05 Thread Christopher N Solis
Hey all.

I wanted to get people's opinion on allowing barbican to talk to multiple
KMIP servers.
I got good advice from Nathan and John and it seems like it would be pretty
easy keeping track of
which secret resides in which KMIP appliance. You would just store the url
in the DTO.
However, in order for barbican to be aware of all KMIP servers wouldn't
that mean that each
kmip server url would need to be in the barbican-api.conf file? Or
somewhere for barbican
to know that multiple kmip servers are available? I noticed that there is a
blueprint to introduce
the concept of a single active and multiple inactive secret store plugins
so I'm trying to stray away from
making multiple active plugins.
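For what it's worth, the conf side would not have to be more than a single
list option. A minimal sketch with oslo.config (the option name below is
made up, not an existing barbican option):

    # Sketch only: 'kmip_servers' is a hypothetical option, not one that
    # barbican defines today.
    from oslo_config import cfg

    kmip_opts = [
        cfg.ListOpt('kmip_servers',
                    default=['localhost:5696'],
                    help='host:port pairs of the KMIP appliances to use'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(kmip_opts, group='kmip_plugin')

    # barbican-api.conf would then carry something like:
    #   [kmip_plugin]
    #   kmip_servers = kmip1.example.com:5696,kmip2.example.com:5696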

Regards,

  Chris Solis
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-06-05 Thread Vilobh Meshram
+1

On Thu, May 14, 2015 at 3:52 AM, John Garbutt  wrote:

> On 12 May 2015 at 20:33, Sean Dague  wrote:
> > On 05/12/2015 01:12 PM, Jeremy Stanley wrote:
> >> On 2015-05-12 10:04:11 -0700 (-0700), Clint Byrum wrote:
> >>> It's a nice up side. However, as others have pointed out, it's only
> >>> capable of displaying the most basic pieces of the architecture.
> >>>
> >>> For higher level views with more components, I don't think ASCII art
> >>> can provide enough bandwidth to help as much as a vector diagram.
> >>
> >> Of course, simply a reminder that just because you have one or two
> >> complex diagram callouts in a document doesn't mean it's necessary
> >> to also go back and replace your simpler ASCII art diagrams with
> >> unintelligible (without rendering) SVG or Postscript or whatever.
> >> Doing so pointlessly alienates at least some fraction of readers.
> >
> > Sure, it's all about trade offs.
> >
> > But I believe that statement implicitly assumes that ascii art diagrams
> > do not alienate some fraction of readers. And I think that's a bad
> > assumption.
> >
> > If we all feel alienated every time anyone does anything that's not
> > exactly the way we would have done it, it's time to give up and pack it
> > in. :) This thread specifically mentioned source based image formats
> > that were internationally adopted open standards (w3c SVG, ISO ODG) that
> > have free software editors that exist in Windows, Mac, and Linux
> > (Inkscape and Open/LibreOffice).
>
> Some great points made here.
>
> Let's try to decide something, and move forward here.
>
> Key requirements seem to be:
> * we need something that gives us readable diagrams
> * if it's not easy to edit, it will go stale
> * ideally needs to be source based, so it lives happily inside git
> * needs to integrate into our sphinx pipeline
> * ideally have an open-source editor for that format (import and
> export), for most platforms
>
> ascii art fails on many of these, but it's always a trade-off.
>
> Possible way forward:
> * lets avoid merging large hard to edit bitmap style images
> * nova-core reviewers can apply their judgement on merging source based
> formats
> * however it *must* render correctly in the generated html (see result
> of docs CI job)
>
> Trying out SVG, and possibly blockdiag, seem like the front runners.
> I don't think we will get consensus without trying them, so let's do that.
>
> Will that approach work?
>
> Thanks,
> John
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][scaling up development] Rally core team re-organization

2015-06-05 Thread Doug Hellmann
Excerpts from Boris Pavlovic's message of 2015-06-05 20:03:44 +0300:
> Hi stackers,
> 
> Seems likes after stackforge/rally -> openstack/rally Rally project started
> being more attractive.
> According recent stats we are on top 3 position (based on Patch sets stats)
> http://stackalytics.com/?release=liberty&metric=patches&project_type=All
> And if we compare half year ago we have 40 open reviews and now we have
> about 140...
> In other words we need to scale core reviewing process with keeping
> quality.
> 
> 
> 
> I suggested in mailing thread:
> [openstack-dev][all][infra][tc][ptl] Scaling up code review process (subdir
> cores)
> To create special rules & ACL groups to have fully automated system.
> 
> Instead of support I got raped by community.

I understand that you feel that the negative response to your
proposal was strong, but this is *COMPLETELY* inappropriate wording
for this mailing list.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-05 Thread Clint Byrum
Excerpts from Thierry Carrez's message of 2015-06-05 05:46:07 -0700:
> So.. summarizing the various options again:
> 

Thanks for the excellent summary Thierry
> Plan C
> Let projects randomly tag point releases whenever
> (-) Still a bit costly in terms of herding cats
> 

I feel like plan C is the smallest change one can make while still moving
toward lowering the project-wide maintenance costs of stable branches.

I don't know if I saw this highlighted enough during the discussion phase,
but the reason this might be a good idea is that stable releases, unlike
"trunk", have a very different purpose and mode.

Unlike with the coordinated release of many projects all at once, we
are not trying to make sure the pieces still fit together. We've been
careful, in stable release policy, to be sure that we are minimizing
changes, so I think coordinating them is unnecessary overhead.

However, releasing all the commits every time does also mean that there's
no signal to downstream that now is a good time to pull this stable
branch and incorporate it into your systems. Even if each project ends up
releasing weekly right after their meeting in a semi-automatic fashion,
this means that users and packagers can attend the meeting, read the
minutes, and have a passive communication stream that doesn't require
them to read and grok every single commit whenever it might be approved.

Finally, I don't think this is still costly in terms of herding cats,
because we have very few cats actually grazing in the stable branch
pastures. I suggest that stable branch maintainers simply get used to
releasing on a weekly basis if there are new commits, and use that as
an opportunity to highlight the excellent work their community has done.
If they can't do that now, let's empower them to tag and release and send
release announcements.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-05 Thread Dolph Mathews
On Fri, Jun 5, 2015 at 11:50 AM, Henry Nash  wrote:

> So I think that Group IDs are actually unique and safe, since in the
> multi-LDAP case we provide an indirection already in Keystone and issue a
> "Public ID" (this is true for both users and groups) that we map to the
> underlying local ID in the particular LDAP backend.


Oh, awesome! I didn't realize we did that for groups as well. So then,
we're safe exposing X-Group-Ids to services via
keystonemiddleware.auth_token but still not X-Group-Names (in any trivial
form).
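For consuming services the check can then stay trivial. A sketch, assuming a
comma-separated X-Group-Ids header in the style of the existing X-Roles
header (auth_token does not emit it today):

    # Sketch of how a service behind keystonemiddleware.auth_token could
    # consume a future X-Group-Ids header; the header name and format are
    # assumptions based on this thread.
    def request_group_ids(headers):
        raw = headers.get('X-Group-Ids', '')
        return {g.strip() for g in raw.split(',') if g.strip()}

    def acl_allows(headers, acl_group_ids):
        # Group IDs are keystone public IDs, so a plain set intersection
        # is enough; no per-domain qualification is needed.
        return bool(request_group_ids(headers) & set(acl_group_ids))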


>
>
> Henry
>
>
>  From: Dolph Mathews  To: "OpenStack Development
> Mailing List (not for usage questions)" ,
> Henry Nash , Henry Nash/UK/IBM@IBMGB Date: 
> 05/06/2015
> 15:38 Subject: Re: [openstack-dev] [keystone][barbican] Regarding
> exposing X-Group- in token validation
>
> --
>
>
>
>
> On Thu, Jun 4, 2015 at 10:17 PM, John Wood <*john.w...@rackspace.com*
> > wrote:
> Hello folks,
>
> Regarding option C, if group IDs are unique within a given cloud/context,
> and these are discoverable by clients that can then set the ACL on a secret
> in Barbican, then that seems like a viable option to me. As it is now, the
> user information provided to the ACL is the user ID information as found in
> X-User-Ids now, not user names.
>
> To Kevin’s point though, are these group IDs unique across domains now, or
> in the future? If not the more complex tuples suggested could be used, but
> seem more error prone to configure on an ACL.
>
> Well, that's a good question, because that depends on the backend, and our
> backend architecture has recently gotten very complicated in this area.
>
> If groups are backed by SQL, then they're going to be globally unique
> UUIDs, so the answer is always yes.
>
> If they're backed by LDAP, then actually it depends on LDAP, but the
> answer should be yes.
>
> But the nightmare scenario we now support is domain-specific identity
> drivers, where each domain can actually be configured to talk to a
> different LDAP server. In that case, I don't think you can make any
> guarantees about group ID uniqueness :( Instead, each domain could provide
> whatever IDs it wants, and those might conflict with those of other
> domains. We have a workaround for a similar issue with user IDs, but it
> hasn't been applied to groups, leaving them quite broken in this scenario.
> I'd consider this to be an issue we need to solve in Keystone, though, not
> something other projects need to worry about. I'm hoping Henry Nash can
> chime in and correct me!
>
>
> Thanks,
> John
>
> *From: *, Kevin M <*kevin@pnnl.gov* >
> * Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" <*openstack-dev@lists.openstack.org*
> >
> * Date: *Thursday, June 4, 2015 at 6:01 PM
> * To: *"OpenStack Development Mailing List (not for usage questions)" <
> *openstack-dev@lists.openstack.org* >
>
> * Subject: *Re: [openstack-dev] [keystone][barbican] Regarding exposing
> X-Group- in token validation
>
> In Juno I tried adding a user in Domain A to group in Domain B. That
> currently is not supported. Would be very handy though.
>
> We're getting a ways from the original part of the thread, so I may have
> lost some context, but I think the original question was, if barbican can
> add group names to their resource acls.
>
> Since two administrative domains can issue the same group name, its not
> safe I believe.
>
> Simply ensuring the group name is associated with a user and the domain
> for the user matches the domain for the group wouldn't work because someone
> with control of their own domain can just make a
> user and give them the group with the name they want and come take your
> credentials.
>
> What may be safe is for the barbican ACL to contain the group_id if they
> are uniqueue across all domains, or take a domain_id & group_name pair for
> the acl.
>
> Thanks,
> Kevin
>
> --
>
> *From:* Dolph Mathews [*dolph.math...@gmail.com* 
> ]
> * Sent:* Thursday, June 04, 2015 1:41 PM
> * To:* OpenStack Development Mailing List (not for usage questions)
> * Subject:* Re: [openstack-dev] [keystone][barbican] Regarding exposing
> X-Group- in token validation
>
> Problem! In writing a spec for this (
> *https://review.openstack.org/#/c/188564/*
>  ), I remembered that groups
> are domain-specific entities, which complicates the problem of providing
> X-Group-Names via middleware.
>
> The problem is that we can't simply expose X-Group-Names to underlying
> services without either A) making a well-documented assumption about the
> ONE owning domain scope of ALL included groups, B) passing significantly
> more data to underlying services than just a list of names (a domain scope
> for every group), C) passing only globally-unique group IDs (services would
> then have to retrieve additional details about each from from keystone if
> they so cared).
>
> Option A) More specif

Re: [openstack-dev] [keystone][reseller] New way to get a project scoped token by name

2015-06-05 Thread Henry Nash
I am sure I have missed something along the way, but can someone explain to me 
why we need this at all?  Project names are unique within a domain, with the 
exception of the project that is acting as its domain (i.e. there can only ever 
be two names clashing in a hierarchy at the domain level and below).  So why 
isn’t specifying “is_domain=True/False” sufficient in an auth scope along with 
the project name?
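For example (purely illustrative; is_domain is not part of the v3 auth scope
today), the two clashing names could be disambiguated as:

    # Illustration of the suggestion above: 'is_domain' in the project scope
    # is an assumption, not part of the v3 auth API today.
    scope_for_the_domain_A = {
        'project': {'name': 'A', 'is_domain': True,
                    'domain': {'name': 'A'}},
    }

    scope_for_the_leaf_project_A = {
        'project': {'name': 'A', 'is_domain': False,
                    'domain': {'name': 'A'}},
    }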

Henry

> On 5 Jun 2015, at 18:02, Adam Young  wrote:
> 
> On 06/03/2015 05:05 PM, Morgan Fainberg wrote:
>> Hi David,
>> 
>> There needs to be some form of global hierarchy delimiter - well more to the 
>> point there should be a common one across OpenStack installations to ensure 
>> we are providing a good and consistent (and more to the point 
>> inter-operable) experience to our users. I'm worried a custom defined 
>> delimiter (even at the domain level) is going to make it difficult to 
>> consume this data outside of the context of OpenStack (there are 
>> applications that are written to use the APIs directly).
> We have one already.  We are working in JSON, and so instead of the project name
> being a string, it can be an array.
> 
> Nothing else is backwards compatible.  Nothing else will ensure we don't
> break existing deployments.
> 
> Moving forward, we should support DNS notation, but it has to be opt-in.
> 
>> 
>> The alternative is to explicitly list the delimiter in the project ( e.g. 
>> {"hierarchy": {"delim": ".", "domain.project.project2"}} ). The additional 
>> need to look up the delimiter / set the delimiter when creating a domain is 
>> likely to make for a worse user experience than selecting one that is not 
>> different across installations.
>> 
>> --Morgan
>> 
>> On Wed, Jun 3, 2015 at 12:19 PM, David Chadwick > > wrote:
>> 
>> 
>> On 03/06/2015 14:54, Henrique Truta wrote:
>> > Hi David,
>> >
>> > You mean creating some kind of "delimiter" attribute in the domain
>> > entity? That seems like a good idea, although it does not solve the
>> > problem Morgan's mentioned that is the global hierarchy delimiter.
>> 
>> There would be no global hierarchy delimiter. Each domain would define
>> its own and this would be carried in the JSON as a separate parameter so
>> that the recipient can tell how to parse hierarchical names
>> 
>> David
>> 
>> >
>> > Henrique
>> >
>> > Em qua, 3 de jun de 2015 às 04:21, David Chadwick
>> > mailto:d.w.chadw...@kent.ac.uk> 
>> > >> 
>> > escreveu:
>> >
>> >
>> >
>> > On 02/06/2015 23:34, Morgan Fainberg wrote:
>> > > Hi Henrique,
>> > >
>> > > I don't think we need to specifically call out that we want a
>> > domain, we
>> > > should always reference the namespace as we do today. Basically, if 
>> > we
>> > > ask for a project name we need to also provide it's namespace (your
>> > > option #1). This clearly lines up with how we handle projects in
>> > domains
>> > > today.
>> > >
>> > > I would, however, focus on how to represent the namespace in a single
>> > > (usable) string. We've been delaying the work on this for a while
>> > since
>> > > we have historically not provided a clear way to delimit the
>> > hierarchy.
>> > > If we solve the issue with "what is the delimiter" between domain,
>> > > project, and subdomain/subproject, we end up solving the usability
>> >
>> > why not allow the top level domain/project to define the delimiter for
>> > its tree, and to carry the delimiter in the JSON as a new parameter.
>> > That provides full flexibility for all languages and locales
>> >
>> > David
>> >
>> > > issues with proposal #1, and not breaking the current behavior you'd
>> > > expect with implementing option #2 (which at face value feels to
>> > be API
>> > > incompatible/break of current behavior).
>> > >
>> > > Cheers,
>> > > --Morgan
>> > >
>> > > On Tue, Jun 2, 2015 at 7:43 AM, Henrique Truta
>> > > mailto:henriquecostatr...@gmail.com>
>> > > > >
>> > > > 
>> > > > > > >
>> > > Hi folks,
>> > >
>> > >
>> > > In Reseller[1], we’ll have the domains concept merged into
>> > projects,
>> > > that means that we will have projects that will behave as 
>> > domains.
>> > > Therefore, it will be possible to have two projects with the same
>> > > name in a hierarchy, one being a domain and another being a
>> > regular
>> > > project. For instance, the following hierarchy will be valid:
>> > >
>> > > A - is_domain project, with domain A
>> > >
>> > > |
>> > >
>> > > B - proje

[openstack-dev] [rally][scaling up development] Rally core team re-organization

2015-06-05 Thread Boris Pavlovic
Hi stackers,

Seems likes after stackforge/rally -> openstack/rally Rally project started
being more attractive.
According recent stats we are on top 3 position (based on Patch sets stats)
http://stackalytics.com/?release=liberty&metric=patches&project_type=All
And if we compare half year ago we have 40 open reviews and now we have
about 140...
In other words we need to scale core reviewing process with keeping
quality.



I suggested in mailing thread:
[openstack-dev][all][infra][tc][ptl] Scaling up code review process (subdir
cores)
To create special rules & ACL groups to have fully automated system.

Instead of support I got raped by community.
Community was very polite & technical oriented in that thread and they
said:
1) I am bad PTL,
2) I don't know how to do open source
3) Rally project sux
4) Rally project community sux
5) Rally project has troubles
6) A lot of more constructive critics

So Instead of having NICE fully automated system for subcores we will use
ugly, not automated but very popular in community "trust" model based on
excel.




Solution:
We will have single core team that can merge anything.
But there will be two types of core (based on trust ;()

I created page in docs, that explains who is who:
https://review.openstack.org/#/c/188843/1

Core reviewer
--
That are core for whole project

Plugin Core reviewer

That will just review/merge their component plugins and nothing else


I hope by end of this cycle each component will have own subteam which will
resolve
most of reviewing process scale issues..


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][reseller] New way to get a project scoped token by name

2015-06-05 Thread Adam Young

On 06/03/2015 05:05 PM, Morgan Fainberg wrote:

Hi David,

There needs to be some form of global hierarchy delimiter - well more 
to the point there should be a common one across OpenStack 
installations to ensure we are providing a good and consistent (and 
more to the point inter-operable) experience to our users. I'm worried 
a custom defined delimiter (even at the domain level) is going to make 
it difficult to consume this data outside of the context of OpenStack 
(there are applications that are written to use the APIs directly).
We have one already.  We are working in JSON, and so instead of the project 
name being a string, it can be an array.


Nothing else is backwards compatible.  Nothing else will ensure we don't 
break existing deployments.


Moving forward, we should support DNS notation, but it has to be opt-in.
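Concretely, something along these lines (an illustrative fragment of the
scope block, not the current v3 schema):

    # Illustration only: in the v3 auth schema today 'name' must be a string.
    # With an array there is no delimiter character to reserve or escape.
    delimited_scope = {
        'project': {'name': 'A.B.A'},          # needs an agreed global delimiter
    }

    array_scope = {
        'project': {'name': ['A', 'B', 'A']},  # hierarchy as a JSON array
    }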



The alternative is to explicitly list the delimiter in the project ( 
e.g. {"hierarchy": {"delim": ".", "domain.project.project2"}} ). The 
additional need to look up the delimiter / set the delimiter when 
creating a domain is likely to make for a worse user experience than 
selecting one that is not different across installations.


--Morgan

On Wed, Jun 3, 2015 at 12:19 PM, David Chadwick 
mailto:d.w.chadw...@kent.ac.uk>> wrote:




On 03/06/2015 14:54, Henrique Truta wrote:
> Hi David,
>
> You mean creating some kind of "delimiter" attribute in the domain
> entity? That seems like a good idea, although it does not solve the
> problem Morgan's mentioned that is the global hierarchy delimiter.

There would be no global hierarchy delimiter. Each domain would define
its own and this would be carried in the JSON as a separate
parameter so
that the recipient can tell how to parse hierarchical names

David

>
> Henrique
>
> Em qua, 3 de jun de 2015 às 04:21, David Chadwick
> mailto:d.w.chadw...@kent.ac.uk>
>>
escreveu:
>
>
>
> On 02/06/2015 23:34, Morgan Fainberg wrote:
> > Hi Henrique,
> >
> > I don't think we need to specifically call out that we want a
> domain, we
> > should always reference the namespace as we do today.
Basically, if we
> > ask for a project name we need to also provide it's
namespace (your
> > option #1). This clearly lines up with how we handle
projects in
> domains
> > today.
> >
> > I would, however, focus on how to represent the namespace
in a single
> > (usable) string. We've been delaying the work on this for
a while
> since
> > we have historically not provided a clear way to delimit the
> hierarchy.
> > If we solve the issue with "what is the delimiter" between
domain,
> > project, and subdomain/subproject, we end up solving the
usability
>
> why not allow the top level domain/project to define the
delimiter for
> its tree, and to carry the delimiter in the JSON as a new
parameter.
> That provides full flexibility for all languages and locales
>
> David
>
> > issues with proposal #1, and not breaking the current
behavior you'd
> > expect with implementing option #2 (which at face value
feels to
> be API
> > incompatible/break of current behavior).
> >
> > Cheers,
> > --Morgan
> >
> > On Tue, Jun 2, 2015 at 7:43 AM, Henrique Truta
> > mailto:henriquecostatr...@gmail.com>
> >
> 
>  >
> > Hi folks,
> >
> >
> > In Reseller[1], we’ll have the domains concept merged into
> projects,
> > that means that we will have projects that will behave
as domains.
> > Therefore, it will be possible to have two projects
with the same
> > name in a hierarchy, one being a domain and another
being a
> regular
> > project. For instance, the following hierarchy will be
valid:
> >
> > A - is_domain project, with domain A
> >
> > |
> >
> > B - project
> >
> > |
> >
> > A - project with domain A
> >
> >
> > That hierarchy faces a problem when a user requests a
project
> scoped
> > token by name, once she’ll pass “domain = ‘A’” and
> project.name  
> >  = “A”. Currently, we have no way to
> > distinguish

Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-05 Thread Henry Nash
So I think that group IDs are actually unique and safe, since in the multi-
LDAP case we already provide an indirection in Keystone and issue a "Public ID"
(this is true for BOTH users and groups) that we map to the underlying local
ID in the particular LDAP backend.
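
A minimal sketch of the kind of indirection described above, deriving a
stable public ID from the (domain, backend-local ID) pair; the hashing
scheme here is an assumption for illustration and is not Keystone's actual
mapping code:

    import hashlib

    def public_id(domain_id, local_id, entity_type='group'):
        # Derive a deterministic, globally unique ID from the
        # domain-specific backend's local identifier.
        seed = '%s:%s:%s' % (domain_id, local_id, entity_type)
        return hashlib.sha256(seed.encode('utf-8')).hexdigest()

    # Two domains backed by different LDAP servers can both expose a local
    # group "cn=admins,ou=groups" without their public IDs colliding:
    print(public_id('domainA', 'cn=admins,ou=groups'))
    print(public_id('domainB', 'cn=admins,ou=groups'))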

Henry

> On 5 Jun 2015, at 15:37, Dolph Mathews  wrote:
> 
> 
> On Thu, Jun 4, 2015 at 10:17 PM, John Wood  > wrote:
> Hello folks,
> 
> Regarding option C, if group IDs are unique within a given cloud/context, and 
> these are discoverable by clients that can then set the ACL on a secret in 
> Barbican, then that seems like a viable option to me. As it is now, the user 
> information provided to the ACL is the user ID information as found in 
> X-User-Ids now, not user names. 
> 
> To Kevin’s point though, are these group IDs unique across domains now, or in 
> the future? If not the more complex tuples suggested could be used, but seem 
> more error prone to configure on an ACL.
> 
> Well, that's a good question, because that depends on the backend, and our 
> backend architecture has recently gotten very complicated in this area.
> 
> If groups are backed by SQL, then they're going to be globally unique UUIDs, 
> so the answer is always yes.
> 
> If they're backed by LDAP, then actually it depends on LDAP, but the answer 
> should be yes.
> 
> But the nightmare scenario we now support is domain-specific identity 
> drivers, where each domain can actually be configured to talk to a different 
> LDAP server. In that case, I don't think you can make any guarantees about 
> group ID uniqueness :( Instead, each domain could provide whatever IDs it 
> wants, and those might conflict with those of other domains. We have a 
> workaround for a similar issue with user IDs, but it hasn't been applied to 
> groups, leaving them quite broken in this scenario. I'd consider this to be 
> an issue we need to solve in Keystone, though, not something other projects 
> need to worry about. I'm hoping Henry Nash can chime in and correct me!
>  
> 
> Thanks,
> John
> 
> From: , Kevin M mailto:kevin@pnnl.gov>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Date: Thursday, June 4, 2015 at 6:01 PM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> 
> Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
> X-Group- in token validation
> 
> In Juno I tried adding a user in Domain A to group in Domain B. That 
> currently is not supported. Would be very handy though.
> 
> We're getting a ways from the original part of the thread, so I may have lost 
> some context, but I think the original question was, if barbarian can add 
> group names to their resource acls.
> 
> Since two administrative domains can issue the same group name, its not safe 
> I believe.
> 
> Simply ensuring the group name is associated with a user and the domain for 
> the user matches the domain for the group wouldn't work because someone with 
> control of their own domain can just make a 
> user and give them the group with the name they want and come take your 
> credentials.
> 
> What may be safe is for the barbican ACL to contain the group_id if they are 
> uniqueue across all domains, or take a domain_id & group_name pair for the 
> acl.
> 
> Thanks,
> Kevin
> 
> From: Dolph Mathews [dolph.math...@gmail.com ]
> Sent: Thursday, June 04, 2015 1:41 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
> X-Group- in token validation
> 
> Problem! In writing a spec for this ( 
> https://review.openstack.org/#/c/188564/ 
>  ), I remembered that groups are 
> domain-specific entities, which complicates the problem of providing 
> X-Group-Names via middleware.
> 
> The problem is that we can't simply expose X-Group-Names to underlying 
> services without either A) making a well-documented assumption about the ONE 
> owning domain scope of ALL included groups, B) passing significantly more 
> data to underlying services than just a list of names (a domain scope for 
> every group), C) passing only globally-unique group IDs (services would then 
> have to retrieve additional details about each from from keystone if they so 
> cared).
> 
> Option A) More specifically, keystone could opt to enumerate the groups that 
> belong to the same domain as the user. In this case, it'd probably make more 
> sense from an API perspective if the "groups" enumeration were part of the 
> "user" resources in the token response body (the "user" object already has a 
> containing domain ID. That means that IF a user were to be assigned a group 
> membership in another domain (assuming we didn't move to disallowing that 
> behavior at some point), then it 

Re: [openstack-dev] [puppet] Change abandonment policy

2015-06-05 Thread Boris Pavlovic
Hi,

+1 for #1, and if a patch is not touched for N weeks, just finish it using the
current active team.

Best regards,
Boris Pavlovic

On Fri, Jun 5, 2015 at 7:27 PM, Richard Raseley  wrote:

> Colleen Murphy wrote:
>
>> 3) Manually abandon after N months/weeks changes that have a -1 that was
>> never responded to
>>
>> ```
>> If a change is submitted and given a -1, and subsequently the author
>> becomes unresponsive for a few weeks, reviewers should leave reminder
>> comments on the review or attempt to contact the original author via IRC
>> or email. If the change is easy to fix, anyone should feel welcome to
>> check out the change and resubmit it using the same change ID to
>> preserve original authorship. If the author is unresponsive for at least
>> 3 months and no one else takes over the patch, core reviewers can
>> abandon the patch, leaving a detailed note about how the change can be
>> restored.
>>
>> If a change is submitted and given a -2, or it otherwise becomes clear
>> that the change can not make it in (for example, if an alternate change
>> was chosen to solve the problem), and the author has been unresponsive
>> for at least 3 months, a core reviewer should abandon the change.
>> ```
>>
>
> +1 for #3
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Change abandonment policy

2015-06-05 Thread Richard Raseley

Colleen Murphy wrote:

3) Manually abandon after N months/weeks changes that have a -1 that was
never responded to

```
If a change is submitted and given a -1, and subsequently the author
becomes unresponsive for a few weeks, reviewers should leave reminder
comments on the review or attempt to contact the original author via IRC
or email. If the change is easy to fix, anyone should feel welcome to
check out the change and resubmit it using the same change ID to
preserve original authorship. If the author is unresponsive for at least
3 months and no one else takes over the patch, core reviewers can
abandon the patch, leaving a detailed note about how the change can be
restored.

If a change is submitted and given a -2, or it otherwise becomes clear
that the change can not make it in (for example, if an alternate change
was chosen to solve the problem), and the author has been unresponsive
for at least 3 months, a core reviewer should abandon the change.
```


+1 for #3

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-06-05 Thread Joshua Harlow

Markus Zoeller wrote:

Joe Gordon  wrote on 05/16/2015 03:33:35 AM:

After further investigation, blockdiag is useless for moderately
complex diagrams.

Here is my attempt at graphing nova [0], but due to a blockdiag bug
from 2013, [1] it is impossible to clearly read. For example, in the
diagram there is not supposed to be any arrow between the conductor
and cinder/glance/neutron. I looked into dia, and while it has plenty
of diagram shapes it doesn't have a good template for software
architecture, but maybe there is a way to make dia work. That just
leaves SVG graphics; after spending an hour or two playing around
with Inkscape, it looks promising (although the learning curve is
pretty steep). Here is my first attempt in Inkscape [2].

[0] http://interactive.blockdiag.com/?compression=deflate&src=eJx9UMtOAzEMvOcrrL0vPwCtVHYryoG2EvSEOHiTtI0axavEFQK0_47dB1oOkEuSmbE9ni6SPbiAO_gyAJviM7yWPfYeJlChZcrV2-2VqafQxOAT62u2fhwTC8rhk9KIkWOMfuBOC0NyPtdLf-RMqX6ImKwXWbN6Wm9e5v9ppNcu07EXi_puVsv2LL-U6jAd8wsSTByJV-QgtibQU-aMgcft4G-RcBE7HzWH9h7QWl9KpaMKf0SNxxGzdyfkElgMSVcCS5GyFnYR7aESxCFjh8WPwt1Gerd7zHxzJc9J_2wiW8r93Czm7cnOYAZjhm9d4H0M

[1] https://bitbucket.org/blockdiag/blockdiag/issue/45/arrows-collisions
[2] https://i.imgur.com/TXwsRoB.png


Thanks,
John


Maybe the "graphviz" extension for Sphinx could be usefull [1].
It's better in displaying edges/dependencies [2].


+1

We use pydot2 (which uses graphviz) in ironic and taskflow (and maybe
elsewhere?) already and it seems to have worked out just fine, so if
people find this useful that's great too (see the small sketch after the
links below).


- http://docs.openstack.org/developer/ironic/dev/states.html
- http://docs.openstack.org/developer/taskflow/states.html

Code that generates these:

- https://github.com/openstack/ironic/blob/master/tools/states_to_dot.py
- https://github.com/openstack/taskflow/blob/master/tools/state_graph.py
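
For anyone curious, a tiny sketch of the pydot usage pattern those tools
rely on (illustrative only and assuming graphviz is installed; this is not
the actual tool code):

    import pydot

    # Build a small state graph and render it with graphviz.
    graph = pydot.Dot('states', graph_type='digraph')
    for name in ('building', 'active', 'error'):
        graph.add_node(pydot.Node(name))
    graph.add_edge(pydot.Edge('building', 'active', label='success'))
    graph.add_edge(pydot.Edge('building', 'error', label='failure'))
    graph.write_svg('states.svg')  # or write_png(), write_raw(), ...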



[1] http://sphinx-doc.org/ext/graphviz.html
[2] http://graphviz.org/content/world

Regards,
Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Prioritize tests over JSCS

2015-06-05 Thread Aaron D Sahlin
Thai -

I thought there was a conscious decision to separate addressing the JSCS 
issues and applying JP's guidelines. Smaller, more precise patches are 
easier to land!
I plan on applying JP's guidelines in a follow-on patch, especially 
since one of my JSCS patches merged this morning.


Aaron D. Sahlin
IBMUSM07(asahlin)
Dept. X2WA
Phone 507-253-7349 Tie 553-7349




From:   Thai Q Tran/Silicon Valley/IBM@IBMUS
To: "OpenStack Development Mailing List \(not for usage questions\)" 

Date:   06/04/2015 01:33 PM
Subject:[openstack-dev] [horizon] Prioritize tests over JSCS



Hi folks,

I know a lot of people are tackling the JSCS stuff, and that's really 
great. But it would be extra nice to see the JSCS changes along with JP's 
guidelines in your patches. Furthermore, if the file you are working on 
doesn't have an accompanying spec file, please make sure that the tests 
for it exist. If they are not there, please prioritize and spend some time 
reviewing patches with the tests you need, or create a spec file and get 
that merged first.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-05 Thread Joshua Harlow

Daniel P. Berrange wrote:

On Fri, Jun 05, 2015 at 02:46:07PM +0200, Thierry Carrez wrote:

So.. summarizing the various options again:

Plan A
Just drop stable point releases.
(-) No more release notes
(-) Lack of reference points to compare installations

Plan B
Push date-based tags across supported projects from time to time.
(-) Encourages to continue using same version across the board
(-) Almost as much work as making proper releases

Plan C
Let projects randomly tag point releases whenever
(-) Still a bit costly in terms of herding cats

Plan D
Drop stable point releases, publish per-commit tarballs
(-) Requires some infra changes, takes some storage space

Plans B, C and D also require some release note / changelog generation
from data maintained *within* the repository.

Personally I think the objections raised against plan A are valid. I
like plan D, since it's more like releasing every commit than "not
releasing anymore". I think it's the most honest trade-off. I could go
with plan C, but I think it's added work for no additional value to the
user.


I don't see a whole lot of difference between plan A and D.
Publishing per-commit tarballs is merely saving the downstream
users the need to run a 'git archive' command, and providing
some auto-generated changelog that's already available from
'git log'.

If the downsteam consumer has their own extra patches ontop of the
stable branch, then it seems D is even less useful than A.


+1 to this, and I'm 99% sure every big/medium/all(?) cloud has extra 
patches on top of stable branches; so publishing a tarball, meh, it will 
have to be regenerated anyway (with said patches) from the git commit it 
came from...




Regards,
Daniel


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: Return request-id to caller

2015-06-05 Thread Miguel Angel Ajo


On Wednesday 3 June 2015 at 12:13, Miguel Ángel Ajo wrote:

> Doesn't this overlap with the work done for OSProfiler?
>  
>  
> More comments inline.  
>  
> Miguel Ángel Ajo
>  
>  
> On Wednesday, 3 de June de 2015 at 11:43, Kekane, Abhishek wrote:
>  
> > Hi Devs,
> >  
> > So for I have got following responses on the proposed solutions:
> >  
> > Solution 1: Return tuple containing headers and body from - 3 +1
> > Solution 2: Use thread local storage to store 'x-openstack-request-id' 
> > returned from headers - 0 +1
> > Solution 3: Unique request-id across OpenStack Services - 1 +1
> >  
> >  
> >  
>  
>  
> I’d vote for Solution 3, without involving keystone (first caller with no 
> req-id generates one randomly),
> the req-id contains a call/hop count, which is incremented on every new 
> call...
>  
>  


Sorry, that suggestion is naive: simply incrementing the call count won't work;
as calls diverge we will have duplicate req-id + hop pairs.
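
For reference, a minimal sketch of what Solution 1 (clients returning the
response headers together with the body) could look like from a caller's
point of view; the helper function and logging format below are
hypothetical, only meant to show how the two request IDs get correlated:

    import logging

    import requests

    LOG = logging.getLogger(__name__)

    def call_service(url, token, local_request_id):
        # Hypothetical helper: perform the call and hand back (headers, body)
        # so the caller can log the remote request id next to its own.
        resp = requests.get(url, headers={'X-Auth-Token': token})
        remote_request_id = resp.headers.get('x-openstack-request-id')
        LOG.info('request %s called %s (remote request id: %s)',
                 local_request_id, url, remote_request_id)
        return resp.headers, resp.json()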

  
>   
> >   
> >  
> >  
> >  
> >  
> > Requesting community people, cross-project members and PTL's to go through 
> > this mailing thread [1] and give your suggestions/opinions about the 
> > solutions proposed so that It will be easy to finalize the solution.
> >  
> > [1] http://lists.openstack.org/pipermail/openstack-dev/2015-May/064842.html
> >  
> > Thanks & Regards,
> >  
> > Abhishek Kekane
> >  
> > -Original Message-
> > From: Nikhil Komawar [mailto:nik.koma...@gmail.com]  
> > Sent: 28 May 2015 12:34
> > To: openstack-dev@lists.openstack.org 
> > (mailto:openstack-dev@lists.openstack.org)
> > Subject: Re: [openstack-dev] [all] cross project communication: Return 
> > request-id to caller
> >  
> > Did you get to talk with anyone in the LogWG ( 
> > https://wiki.openstack.org/wiki/LogWorkingGroup )? In wonder what kind of 
> > recommendations, standards we can come up with while adopting a cross 
> > project solution. If our logs follow certain prefix and or suffix style 
> > across projects, that would help a long way.
> >  
> > Personally: +1 on Solution 1
> >  
> > On 5/28/15 2:14 AM, Kekane, Abhishek wrote:
> > >  
> > > Hi Devs,
> > >  
> > >  
> > > Thank you for your opinions/thoughts.
> > >  
> > > However I would like to suggest that please give +1 against the  
> > > solution which you will like to propose so that at the end it will be  
> > > helpful for us to consolidate the voting against each solution and  
> > > make some decision.
> > >  
> > >  
> > > Thanks in advance.
> > >  
> > >  
> > > Abhishek Kekane
> > >  
> > >  
> > >  
> > > *From:*Joe Gordon [mailto:joe.gord...@gmail.com]
> > > *Sent:* 28 May 2015 00:31
> > > *To:* OpenStack Development Mailing List (not for usage questions)
> > > *Subject:* Re: [openstack-dev] [all] cross project communication:
> > > Return request-id to caller
> > >  
> > >  
> > >  
> > >  
> > > On Wed, May 27, 2015 at 12:06 AM, Kekane, Abhishek  
> > > mailto:abhishek.kek...@nttdata.com) 
> > > > wrote:
> > >  
> > > Hi Devs,
> > >  
> > >  
> > > Each OpenStack service sends a request ID header with HTTP responses.
> > > This request ID can be useful for tracking down problems in the logs.
> > > However, when operation crosses service boundaries, this tracking can  
> > > become difficult, as each service has its own request ID. Request ID  
> > > is not returned to the caller, so it is not easy to track the request.
> > > This becomes especially problematic when requests are coming in  
> > > parallel. For example, glance will call cinder for creating image, but  
> > > that cinder instance may be handling several other requests at the  
> > > same time. By using same request ID in the log, user can easily find  
> > > the cinder request ID that is same as glance request ID in the g-api  
> > > log. It will help operators/developers to analyse logs effectively.
> > >  
> > >  
> > > Thank you for writing this up.
> > >  
> > >  
> > >  
> > > To address this issue we have come up with following solutions:
> > >  
> > >  
> > > Solution 1: Return tuple containing headers and body from
> > > respective clients (also favoured by Joe Gordon)
> > >  
> > > Reference:
> > >  
> > > https://review.openstack.org/#/c/156508/6/specs/log-request-id-mapping
> > > s.rst
> > >  
> > >  
> > > Pros:
> > >  
> > > 1. Maintains backward compatibility
> > >  
> > > 2. Effective debugging/analysing of the problem as both calling
> > > service request-id and called service request-id are logged in
> > > same log message
> > >  
> > > 3. Build a full call graph
> > >  
> > > 4. End user will able to know the request-id of the request and
> > > can approach service provider to know the cause of failure of
> > > particular request.
> > >  
> > >  
> > > Cons:
> > >  
> > > 1. The changes need to be done first in cross-projects before
> > > making changes in clients
> > >  
> > > 2. Applications which are using python-*clients needs to do
> > > required changes (che

[openstack-dev] [ptl] Skipping Release management 0800-1000 UTC office hours next Tuesday

2015-06-05 Thread Thierry Carrez
Hi PTLs and release liaisons,

Due to being on a plane without wifi, I'll be missing my 0800-1000 UTC
release management office hours on Tuesday, June 9.

We'll still have office hours between 1800 and 2000 UTC on that day, in
case you have anything to discuss or questions to ask.

If you can't make those instead, you can still drop questions on the
#openstack-relmgr-office channel and we'll asynchronously answer them.

More details on:
https://wiki.openstack.org/wiki/Release_Cycle_Management

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Domain and Project naming

2015-06-05 Thread David Chadwick
Hi Jamie

I think if we are going for hierarchical names we should do it properly
in one go, i.e. have a recursive scheme that allows infinite nesting of
name components; that will solve all current and future problems.
Having a half-baked scheme which only allows one level of nesting, or
requires globally unique name components, is only storing up trouble for
the future.

regards

David


On 05/06/2015 04:13, Jamie Lennox wrote:
> 
> 
> - Original Message -
>> From: "Adam Young" 
>> To: "OpenStack Development Mailing List" 
>> Sent: Thursday, 4 June, 2015 2:25:52 PM
>> Subject: [openstack-dev] [Keystone] Domain and Project naming
>>
>> With Hierarchical Multitenancy, we have the issue that a project is
>> currently restricted in its naming further than it should be.  The domain
>> entity enforces that all project names under the domain be
>> unique, but really what we should say is that all projects under a
>> single parent project be unique.  However, we have, at present, an API
>> which allows a user to specify the domain by either name or ID, and the project
>> again by either name or ID, but here we care only about the name.  This
>> can be used either in specifying the token, or in operations on the
>> project API.
>>
>> We should change projec naming to be nestable, and since we don't have a
>> delimiter set, we should expect the names to be an array, where today we
>> might have:
>>
>>  "project": {
>>  "domain": {
>>  "id": "1789d1",
>>  "name": "example.com"
>>  },
>>  "id": "263fd9",
>>  "name": "project-x"
>>  }
>>
>> we should allow and expect:
>>
>>  "project": {
>>  "domain": {
>>  "id": "1789d1",
>>  "name": "example.com"
>>  },
>>  "id": "263fd9",
>>  "name": [ "grandpa", "dad", "daughter"]
>>  }
>>
>> This will, of course, break Horizon and lots of other things, which
>> means we need a reasonable way to display these paths.  The typical UI
>> approach is a breadcrumb trail, and I think something where we put the
>> segments of the path in the UI, each clickable, should be
>> understandable: I'll defer to the UX experts if this is reasonable or not.
>>
>> The alternative is that we attempt to parse the project names. Since we
>> have not reserved a delimiter, we will break someone somewhere if we
>> force one on people.
>>
>>
>> As an alternative, we should start looking in to following DNS standards
>> for naming projects and hosts.  While a domain should not be required to
>> be a DNS-registered domain name, we should allow for the case where a
>> user wants that to be the case, and to synchronize naming across
>> multiple clouds.  In order to enforce this, we would have to have an
>> indicator on a domain name that it has been checked with DNS; ideally,
>> the user would add a special SRV or TXT record or something that
>> Keystone could use to confirm that the user has OKed this domain name
>> being used by this cloud...or something perhaps with DNSSEC, checking
>> that a user has permission to assign a specific domain name to a set of
>> resources in the cloud.  If we do that, the projects under that domain
>> should also be valid DNS subzones, and the hosts either  FQDNs or some
>> alternate record...this would tie in Well with Designate.
>>
>> Note that I am not saying "force this"  but rather "allow this" as it
>> will simplify the naming when bursting from cloud to cloud:  the Domain
>> and project names would then be synchronized via DNS regardless of
>> hosting provider.
>>
>> As an added benefit, we could provide a SRV or TEXT record (or some new
>> URL type..I heard one is coming) that describes where to find the home
>> Keystone server for a specified domain...it would work nicely with the
>> K2K strategy.
>>
>> If we go with DNS project naming, we can leave all project names in a
>> flat string.
>>
>>
>> Note that the DNS approach can work even if the user does not wish to
>> register their own DNS.  A hosting provider (I'll pick dreamhost, cuz  I
>> know they are listening)  could say the each of their tenants picks a
>> user name...say that mine i admiyo,  they would then create a subdomain
>> of admiyo.dreamcompute.dreamhost.com.  All of my subprojects would then
>> get additional zones under that.  If I were then to burst from there to
>> Bluebox, the Keystone domain name would be the one that I was assigned
>> back at Dreamhost.
> 
> Back up. Are our current restrictions a problem?
> 
> Even with hierarchical projects is it a problem to say that a project name 
> still must be unique per domain? I get that in theory you might want to be 
> able to identify a nested project by name under other projects but that's not 
> something we have to allow immediately.
> 
> I haven't followed the reseller case closely but in any situation where you 
> had off control like that w

Re: [openstack-dev] [keystone] [nova] [oslo] [cross-project] Dynamic Policy

2015-06-05 Thread Adam Young

On 06/03/2015 01:43 PM, Sean Dague wrote:

On 06/03/2015 10:10 AM, Adam Young wrote:

I gave a presentation on Dynamic Policy for Access Control at the Summit.

https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/dynamic-policy-for-access-control


My slides are here:
http://adam.younglogic.com/presentations/dynamic_policy.pp.pdf


My original blog post attempted to lay out the direction:

http://adam.younglogic.com/2014/11/dynamic-policy-in-keystone/

And the Overview spec is here:
https://review.openstack.org/#/c/147651/


This references multiple smaller specs:

A unified policy file:
https://review.openstack.org/134656

The unified policy file, as an actual single file is part of this
process which I'm concerned isn't workable unless all OpenStack
components are upgraded in lock step, which is actually a situation we want
to do less of, not more of.
Right.  What we really need is a set of common rules that the projects 
all agree on as the start of project-specific policy.


A unified policy file is not maintainable long term if there are going 
to be a huge number of microversion changes.
We'll have strange dependencies where we need to get a change into the 
unified file first, but then the code that goes
into Nova gets modified enough so that the policy is no longer valid.

I still think the unified policy file process is essential to working 
out the differences between the projects.


Perhaps a better starting point is a common policy header file, assumed 
to be applied prior to the default policy from each of the projects?
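
As a strawman, the shared part could be as small as a handful of rule
definitions that every project agrees on and loads before its own policy
file; the rule names and check strings below are examples only, not an
agreed set:

    # Hypothetical common "policy header": base rules shared by all
    # projects, merged before each project's own policy.json.
    COMMON_RULES = {
        "admin_required": "role:admin",
        "owner": "user_id:%(user_id)s",
        "admin_or_owner": "rule:admin_required or rule:owner",
        "default": "rule:admin_or_owner",
    }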




Assume that Keystone git tree owns that file. Nova adds an API via
microversions for an intermediate milestone that adds new policy in.
Deployers CD this version out, leaving Keystone at the previous release
version. Now Nova has code out there that requires policy which doesn't
exist. The policy at some level is really linked to the code.

In a world of microversions this is now a lot more like database schema,
because big bang API changes are a thing of the past (at least on the
Nova side). (Note: I'm working up some more general explanation of that
whole model shortly, part of our comms plan out of summit).


So... I think microversions are the heart of what we need to address.  It 
took me a while to think through it (too much for an email), so I 
wrote it up here:


http://adam.younglogic.com/2015/06/dyn-policy-microversions/

So long as the unified policy file is turned back into a Nova managed 
policy file, I think we meet your concerns.





-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][release] updating client requirements

2015-06-05 Thread Doug Hellmann
We have a bunch of client libraries with out-dated requirements to
varying degrees [1], and some of them are causing dependency conflicts
in gate jobs. It would be good if project teams could prioritize
those reviews so we can release updates early next week.

Thanks,
Doug

[1] 
https://review.openstack.org/#/q/owner:%22OpenStack+Proposal+Bot%22+status:open+branch:master+project:%255Eopenstack/python-.*client,n,z

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Standard way to indicate a partial blueprint implementation?

2015-06-05 Thread Alexis Lee
Chris Friesen said on Thu, Jun 04, 2015 at 10:57:54PM -0600:
> On 06/04/2015 05:23 PM, Zane Bitter wrote:
> > I have personally been using:
> >
> >   Implements: partial-blueprint x
> >
> >but I don't actually care much. I would also be fine with:
> >
> >   Partially-Implements: blueprint x
> 
> If we need one, second one gets my vote.
> 
> I'm not sure we actually need one though.  Do we really care about
> the distinction between a commit that fully implements a blueprint
> and one that partially implements it?  I'd expect that most
> blueprints require multiple commits, and so "blueprint X" could
> implicitly be assumed to be a partial implementation in the common
> case.

We need something that looks like a tag to prevent hits on "blueprint
blah suggested this but I'm doing something else".

Call me crazy but this seems the obvious:

Blueprint: xyzzy


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-05 Thread Fox, Kevin M
I seem to remember there being a mapping driver of some kind in Juno+ that, when 
enabled, doesn't just use the LDAP unique identifier raw. It's optional though. I 
also don't know if it double-checks for uniqueness or just hashes.

Thanks,
Kevin


From: Dolph Mathews
Sent: Friday, June 05, 2015 7:37:54 AM
To: OpenStack Development Mailing List (not for usage questions); Henry Nash; 
Henry Nash
Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
X-Group- in token validation


On Thu, Jun 4, 2015 at 10:17 PM, John Wood 
mailto:john.w...@rackspace.com>> wrote:
Hello folks,

Regarding option C, if group IDs are unique within a given cloud/context, and 
these are discoverable by clients that can then set the ACL on a secret in 
Barbican, then that seems like a viable option to me. As it is now, the user 
information provided to the ACL is the user ID information as found in 
X-User-Ids now, not user names.

To Kevin’s point though, are these group IDs unique across domains now, or in 
the future? If not the more complex tuples suggested could be used, but seem 
more error prone to configure on an ACL.

Well, that's a good question, because that depends on the backend, and our 
backend architecture has recently gotten very complicated in this area.

If groups are backed by SQL, then they're going to be globally unique UUIDs, so 
the answer is always yes.

If they're backed by LDAP, then actually it depends on LDAP, but the answer 
should be yes.

But the nightmare scenario we now support is domain-specific identity drivers, 
where each domain can actually be configured to talk to a different LDAP 
server. In that case, I don't think you can make any guarantees about group ID 
uniqueness :( Instead, each domain could provide whatever IDs it wants, and 
those might conflict with those of other domains. We have a workaround for a 
similar issue with user IDs, but it hasn't been applied to groups, leaving them 
quite broken in this scenario. I'd consider this to be an issue we need to 
solve in Keystone, though, not something other projects need to worry about. 
I'm hoping Henry Nash can chime in and correct me!


Thanks,
John

From: , Kevin M mailto:kevin@pnnl.gov>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, June 4, 2015 at 6:01 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>

Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
X-Group- in token validation

In Juno I tried adding a user in Domain A to group in Domain B. That currently 
is not supported. Would be very handy though.

We're getting a ways from the original part of the thread, so I may have lost 
some context, but I think the original question was, if barbarian can add group 
names to their resource acls.

Since two administrative domains can issue the same group name, its not safe I 
believe.

Simply ensuring the group name is associated with a user and the domain for the 
user matches the domain for the group wouldn't work because someone with 
control of their own domain can just make a
user and give them the group with the name they want and come take your 
credentials.

What may be safe is for the barbican ACL to contain the group_id if they are 
uniqueue across all domains, or take a domain_id & group_name pair for the acl.

Thanks,
Kevin


From: Dolph Mathews [dolph.math...@gmail.com]
Sent: Thursday, June 04, 2015 1:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
X-Group- in token validation

Problem! In writing a spec for this ( https://review.openstack.org/#/c/188564/ 
), I remembered that groups are domain-specific entities, which complicates the 
problem of providing X-Group-Names via middleware.

The problem is that we can't simply expose X-Group-Names to underlying services 
without either A) making a well-documented assumption about the ONE owning 
domain scope of ALL included groups, B) passing significantly more data to 
underlying services than just a list of names (a domain scope for every group), 
C) passing only globally-unique group IDs (services would then have to retrieve 
additional details about each from from keystone if they so cared).

Option A) More specifically, keystone could opt to enumerate the groups that 
belong to the same domain as the user. In this case, it'd probably make more 
sense from an API perspective if the "groups" enumeration were part of the 
"user" resources in the token response body (the "user" object already has a 
containing domain ID. That means that IF a user were to be assigned a group 
membership in another domain (assuming we didn't move to disallowing that 
behavior at some point), then it would have to be ex

Re: [openstack-dev] The Nova API in Kilo and Beyond

2015-06-05 Thread David Kranz

On 06/05/2015 07:32 AM, Sean Dague wrote:

One of the things we realized at the summit was that we'd been working
through a better future for the Nova API for the past 5 cycles, gotten
somewhere quite useful, but had really done a poor job on communicating
what was going on and why, and where things are headed next.

I've written a bunch of English to explain it (which should be on the
planet shortly as well) -
https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/ (with
lots of help from Ed Leaf, John Garbutt, and Matt Gillard on content and
copy editing).

Yes, this is one of those terrible mailing list posts that points people
to read a thing not on the list (I appologize). But at 2700 words, I
think you'll find it more comfortable to read not in email.

Discussion is welcome here for any comments folks have. Some details
were trimmed for the sake of it not being a 6000 word essay, and to make
it accessible to people that don't have a ton of Nova internals
knowledge. We'll do our best to field questions, all of which will be
integrated into the eventual dev ref version of this.

Thanks for your time,

-Sean

Thanks, Sean. Great writeup. There are two issues I think might need 
more clarification/amplification:


1. Does the microversion methodology, and the motivation for true 
interoperability, imply that there needs to be a new version for every 
bug fix that could be detected by users of the API? There was back and 
forth about that in the review about the ip6 server list filter bug you 
referenced. If so, this is a pretty strong constraint that will need 
more guidance for reviewers about which kinds of changes need new 
versions and which don't.


2. What is the policy for making incompatible changes, now that 
versioning "allows" such changes to be made? If some one doesn't like 
the name of one of the keys in a returned dict, and submits a change 
with new microversion, how should that be evaluated? IIRC, this was an 
issue that inspired some dislike about the original v3 work.


 -David

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Nova API in Kilo and Beyond

2015-06-05 Thread Neil Jerram

On 05/06/15 12:32, Sean Dague wrote:

https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/


This is really informative and useful, thanks.

A few comments / questions, with bits of your text in quotes:

"Even figuring out what a cloud could do was pretty terrible. You could 
approximate it by listing the extensions of the API, then having a bunch 
of logic in your code to realize which extensions turned on or off 
certain features, or added new data to payloads."


I guess that's why the GNU autoconf/configure system has always advised 
testing for particular wanted features, instead of looking at versions 
and then relying on carnal knowledge to know what those versions imply. 
 Is that feature-testing-based approach impractical for OpenStack?


"Then she runs her code at against another cloud, which runs a version 
of Nova that predates this change. She's now effectively gone back in 
time. Her code now returns thousands of records instead of 1, and she's 
terribly confused why. She also has no way to figure out if random cloud 
Z is going to support this feature or not. So the only safe thing to do 
is implement the filtering client side instead, which means the server 
side filtering actually gained her very little. It's not something she 
can ever determine will work ahead of time. It's an API that is 
untrustworthy, so it's something that's best avoided."


Except that she still has to do all this anyway - i.e. write the 
client-side filtering, and figure out when to use it instead of 
server-side - even if there was an API version change accompanying the 
filtering feature.  Doesn't she?


The difference is just between making the switch based on a version 
number, and making it based on detected feature support.


"If you want features in the 2.3 microversion, ..."

I especially appreciate this part, as I've been seeing all the chat 
about microversions go past, and not really understanding it.


FWIW, though - and maybe this is just me - when I hear "microversion", 
I'm thinking of the "Z" in an "X.Y.Z" version number.  (With X = major 
and Y = minor.)  So it's counterintuitive for me that "2.3" is a 
microversion; it just sounds like a perfectly normal major/minor version 
number.  Are 2.0 and 2.1 microversions too?


But this is just bikeshedding really, so feel free to ignore...

"without building a giant autoconf-like system"

Aha, so you probably did consider that option, then. :-)

Many thanks,
Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tricircle] Polling for weekly team meeting

2015-06-05 Thread Zhipeng Huang
Hi All,

The Tricircle Project has been on stackforge for a while, and without much
activities.

Now we will completely restructure the code base to make it more
community/open-source friendly, and hopefully less corporate-PoC looking :P

At the mean time I want to call for attention for people who are interested
in this project, to participate in a time poll for our weekly meeting:

http://doodle.com/d7fvmgvrwv8y3bqv

I would recommend UTC 13:00 because it is one of the few time periods when all the
continents are able to be awake (tougher on the US though).

Please find more info on Tricircle at
https://github.com/stackforge/tricircle (new code base would come in the
next few weeks). It mainly aim to solve OpenStack deployment acorss
multiple sites.

Also depending on OPNFV Multisite Project's decision, Tricircle might be
one of the upstream projects of Multisite, which aims at developing
requirements for NFV multi-NFVI-PoPs VIM deployment. More info :
https://wiki.opnfv.org/multisite https://www.opnfv.org/arno
-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Prooduct Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-05 Thread Matthew Thode
On 06/05/2015 09:34 AM, Alan Pevec wrote:
>> If the downsteam consumer has their own extra patches ontop of the
>> stable branch, then it seems D is even less useful than A.
> 
> It is not - downstream (speaking for RDO) I would keep Version:
> 2015.1.N where N is stable patch# so that we have a common reference
> point with other distros.
> Our patches, if any, then come on top this, so it's clear which
> patches are added by distro.
> 
> Cheers,
> Alan
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Yes, we use a revbump strategy here for patches, changing to
2015.0.0-r10 or something.  I think this is the standard way most
distros signify changes from the base package.

-- 
Matthew Thode



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-05 Thread Dolph Mathews
On Thu, Jun 4, 2015 at 10:17 PM, John Wood  wrote:

>  Hello folks,
>
>  Regarding option C, if group IDs are unique within a given
> cloud/context, and these are discoverable by clients that can then set the
> ACL on a secret in Barbican, then that seems like a viable option to me. As
> it is now, the user information provided to the ACL is the user ID
> information as found in X-User-Ids now, not user names.
>
>  To Kevin’s point though, are these group IDs unique across domains now,
> or in the future? If not the more complex tuples suggested could be used,
> but seem more error prone to configure on an ACL.
>

Well, that's a good question, because that depends on the backend, and our
backend architecture has recently gotten very complicated in this area.

If groups are backed by SQL, then they're going to be globally unique
UUIDs, so the answer is always yes.

If they're backed by LDAP, then actually it depends on LDAP, but the answer
should be yes.

But the nightmare scenario we now support is domain-specific identity
drivers, where each domain can actually be configured to talk to a
different LDAP server. In that case, I don't think you can make any
guarantees about group ID uniqueness :( Instead, each domain could provide
whatever IDs it wants, and those might conflict with those of other
domains. We have a workaround for a similar issue with user IDs, but it
hasn't been applied to groups, leaving them quite broken in this scenario.
I'd consider this to be an issue we need to solve in Keystone, though, not
something other projects need to worry about. I'm hoping Henry Nash can
chime in and correct me!
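
To make the two shapes being debated concrete, a purely illustrative
sketch (Barbican's ACL API does not accept groups today, and these field
names are made up for the example):

    # Option C: globally unique group IDs on the secret ACL.
    acl_with_group_ids = {
        "read": {"groups": ["3b9f1c2e6a5d4e8f9a0b1c2d3e4f5a6b"],
                 "project-access": False},
    }

    # Alternative: (domain, group name) pairs, needed if group IDs are not
    # guaranteed unique across domain-specific backends.
    acl_with_domain_scoped_groups = {
        "read": {"groups": [{"domain_id": "1789d1", "name": "admins"}],
                 "project-access": False},
    }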


>
>  Thanks,
> John
>
>   From: , Kevin M 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, June 4, 2015 at 6:01 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
>
> Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing
> X-Group- in token validation
>
>   In Juno I tried adding a user in Domain A to group in Domain B. That
> currently is not supported. Would be very handy though.
>
> We're getting a ways from the original part of the thread, so I may have
> lost some context, but I think the original question was, if barbarian can
> add group names to their resource acls.
>
> Since two administrative domains can issue the same group name, its not
> safe I believe.
>
> Simply ensuring the group name is associated with a user and the domain
> for the user matches the domain for the group wouldn't work because someone
> with control of their own domain can just make a
> user and give them the group with the name they want and come take your
> credentials.
>
> What may be safe is for the barbican ACL to contain the group_id if they
> are uniqueue across all domains, or take a domain_id & group_name pair for
> the acl.
>
> Thanks,
> Kevin
>
>  --
> *From:* Dolph Mathews [dolph.math...@gmail.com]
> *Sent:* Thursday, June 04, 2015 1:41 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [keystone][barbican] Regarding exposing
> X-Group- in token validation
>
>   Problem! In writing a spec for this (
> https://review.openstack.org/#/c/188564/ ), I remembered that groups are
> domain-specific entities, which complicates the problem of providing
> X-Group-Names via middleware.
>
>  The problem is that we can't simply expose X-Group-Names to underlying
> services without either A) making a well-documented assumption about the
> ONE owning domain scope of ALL included groups, B) passing significantly
> more data to underlying services than just a list of names (a domain scope
> for every group), C) passing only globally-unique group IDs (services would
> then have to retrieve additional details about each from from keystone if
> they so cared).
>
>  Option A) More specifically, keystone could opt to enumerate the groups
> that belong to the same domain as the user. In this case, it'd probably
> make more sense from an API perspective if the "groups" enumeration were
> part of the "user" resources in the token response body (the "user" object
> already has a containing domain ID. That means that IF a user were to be
> assigned a group membership in another domain (assuming we didn't move to
> disallowing that behavior at some point), then it would have to be excluded
> from this list. If that were true, then I'd also follow that X-Group-Names
> become X-User-Group-Names, so that it might be more clear that they belong
> to the X-User-Domain-*.
>
>  Option B) This is probably the most complex solution, but also the most
> explicit. I have no idea how this interface would look in terms of headers
> using current conventions. If we're going to break conventions, then I'd
> want to pass a id+domain_id+name for each group reference. So, rather than
> including a list of na

Re: [openstack-dev] [Glance][Keystone] Glance and trusts

2015-06-05 Thread Dolph Mathews
On Thu, Jun 4, 2015 at 1:54 AM, David Chadwick 
wrote:

> I did suggest another solution to Adam whilst we were in Vancouver, and
> this mirrors what happens in the real world today when I order something
> from a supplier and a whole supply chain is involved in creating the end
> product that I ordered. This is not too dissimilar to a user requesting
> a new VM. Essentially each element in the supply chain trusts the two
> adjacent elements. It has contracts with both its customers and its
> suppliers to define the obligations of each party. When something is
> ordered from it, it trusts the purchaser, and on the strength of this,
> it will order from its suppliers. Each element may or may not know who
> the original customer is, but if it needs to know, it trusts the
> purchaser to tell it. Furthermore the customer does not need to delegate
> any of his/her permissions to his/her supplier. If we used such a system
> of trust between Openstack services, then we would not need delegation
> of authority and "trusts" as they are implemented today. It could
> significantly simplify the interactions between OpenStack services.
>

+1! I feel like this is the model that we started with in OpenStack, and
have grown additional complexity over time without much benefit.


>
> regards
> David
>
> On 03/06/2015 21:03, Adam Young wrote:
> > On 06/02/2015 12:57 PM, Mikhail Fedosin wrote:
> >> Hello! I think it's a good time to discuss implementation of trusts in
> >> Glance v2 and v3 api.
> >>
> >> Currently we have two different situations during image creation where
> >> our token may expire, which leads to unsuccessful operation.
> >>
> >> First is connection between glance-api and glance-registry. In this
> >> case we have a solution (https://review.openstack.org/#/c/29967/) -
> >> use_user_token parameter in glance-api.conf, but it is True by default
> >> . If it's changed to False then glance-api will use its own
> >> credentials to authorize in glance-registry and it prevents many
> >> possible issues with user token expiration. So, I'm interested if
> >> there are some performance degradations if we change use_user_token to
> >> False and what are the reasons against making it the default value.
> >>
> >> Second one is linked with Swift. Current implementation uploads chunks
> >> one by one and requires authorization each time. It may lead to
> >> problems: for example we have to upload 100 chunks, after 99th one,
> >> token expired and glance can't upload the last one, catches an
> >> exception and tries to remove stale chunks from storage. Of course it
> >> will fail, because token is not valid anymore, and that's why there
> >> will be 99 garbage objects in the storage.
> >> With Single-tenant mode glance uses its own credentials to upload
> >> files, so it's possible to create new connection on each chunk upload
> >> or catch Unauthorized exception and recreate connections only in that
> >> cases. But with Multi-tenant mode there is no way to do it, because
> >> user credentials are required. So it seems that trusts is the only one
> >> solution here.
> > The problem with using trusts is that it would need to be created
> > per-user, and that is going to be expensive.  It would be possible, as
> > Heat does something of this nature:
> >
> > 1. User calls glance,
> > 2. Glance creates a trust with some limitation, either time or number of
> > uses
> > 3.  Trusts are used for all operations with swift.
> > 4. Glance should clean up the trust when it is complete.
> >
> > I don't love the solution, but I think it is the best we have.  Ideally
> > the user would opt in to the trust, but in this case, it is kindof
> > implicit by them calling the API.
> >
> >
> > We should limit the trust creation to only have those roles (or a
> > subset) on the token used to create the trust.
> >
> >
> >
> >
> >> I would be happy to hear your opinions on that matter. If you know
> >> other situations where trusts are useful or some other approaches
> >> please share.
> >>
> >> Best regards,
> >> Mike Fedosin
> >>
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___

Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-05 Thread Tristan Cacqueray
On 06/05/2015 05:46 AM, Thierry Carrez wrote:
> So.. summarizing the various options again:
> 
> Plan A
> Just drop stable point releases.
> (-) No more release notes
> (-) Lack of reference points to compare installations
> 
> Plan B
> Push date-based tags across supported projects from time to time.
> (-) Encourages to continue using same version across the board
> (-) Almost as much work as making proper releases
> 
> Plan C
> Let projects randomly tag point releases whenever
> (-) Still a bit costly in terms of herding cats
> 
> Plan D
> Drop stable point releases, publish per-commit tarballs
> (-) Requires some infra changes, takes some storage space
> 
> Plans B, C and D also require some release note / changelog generation
> from data maintained *within* the repository.
> 
> Personally I think the objections raised against plan A are valid. I
> like plan D, since it's more like releasing every commit than "not
> releasing anymore". I think it's the most honest trade-off. I could go
> with plan C, but I think it's added work for no additional value to the
> user.
> 
> What would be your preferred option ?
> 

Apologies if I'm off-track here, but Plans A and D seem like a steep
change for users. IMO having stable releases (at least between two
releases) is a valid use case.

I guess Plan C is the preferred option, along with a "stable-release"
tag; projects that opt in would have the responsibility to create stable
branches and maintain them.

Regards,
Tristan



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Nova API in Kilo and Beyond

2015-06-05 Thread Chris Dent

On Fri, 5 Jun 2015, Sean Dague wrote:


Thanks for your time,


Thanks for writing that up.

I recognize that microversions exist and are as they are so I don't
want to derail, but my curiosity was piqued:

Riddle me this: If Microversions are kind of like content-negotiation
(and we love content-negotiation) for APIs, why not just use content-
negotiation instead of a header? Instead of:

   X-OpenStack-Nova-API-Version: 2.114

do (media type made up and not suggesting it as canonical):

   Accept: application/openstack-nova-api-2.114+json

or even

   Accept: application/vnd.openstack-nova-api+json; version=2.114

(and similar on the content-type header). There is precedent for
this sort of thing in, for example, the github api.
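
For what it's worth, here is a tiny sketch (purely illustrative, not
anything Nova actually ships) of how a server could pull the version out
of such an Accept header and fall back to a default otherwise:

    import re

    # Hypothetical media type: application/vnd.openstack-nova-api+json
    # with an optional "version" parameter.
    ACCEPT_RE = re.compile(
        r'application/vnd\.openstack-nova-api\+json'
        r'(?:\s*;\s*version=(?P<version>\d+\.\d+))?')

    def negotiate_version(accept_header, default='2.1'):
        match = ACCEPT_RE.search(accept_header or '')
        if not match:
            return default  # e.g. plain application/json
        return match.group('version') or default

    assert negotiate_version('application/json') == '2.1'
    assert negotiate_version(
        'application/vnd.openstack-nova-api+json; version=2.114') == '2.114'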

(I'll not[1] write about "srsly, can we please stop giving Jackson the
Absent so much freaking power".)

[1] whoops
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Updating Our Concept of Resources

2015-06-05 Thread Alexis Lee
Good summary, Ed!

Ed Leafe said on Wed, Jun 03, 2015 at 07:53:02AM -0500:
> > I totally agree the scheduler doesn't have to know anything about
> > flavors though. We should push them out to request validation in the
> > Nova API. This can be considered part of cleaning up the scheduler API.
>
> This idea was also discussed and seemed to get a lot of support. ...
> Don Dugger volunteered to write up a spec for removing flavors from
> the scheduler.

Yay <3

> So did I miss anything? :)

One thing, we also discussed allowing a request to include multiple
flavors. This could be used to mix-and-match pizza-base-flavors with
pizza-topping-flavors. So to escape the pizza metaphor briefly, you
could take XLarge VPS resources with a Medium SSD package and a Small
network package. This approach prevents too much resource fragmentation
and maintains a level of convenience for the user while allowing for an
exponential number of flavor combinations.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-05 Thread Alan Pevec
> If the downsteam consumer has their own extra patches ontop of the
> stable branch, then it seems D is even less useful than A.

It is not - downstream (speaking for RDO) I would keep Version:
2015.1.N, where N is the stable patch number, so that we have a common
reference point with other distros.
Our patches, if any, then come on top of this, so it's clear which
patches are added by the distro.

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-05 Thread Matthew Thode
On 06/05/2015 07:46 AM, Thierry Carrez wrote:
> So.. summarizing the various options again:
> 
> Plan A
> Just drop stable point releases.
> (-) No more release notes
> (-) Lack of reference points to compare installations
> 
> Plan B
> Push date-based tags across supported projects from time to time.
> (-) Encourages to continue using same version across the board
> (-) Almost as much work as making proper releases
> 
> Plan C
> Let projects randomly tag point releases whenever
> (-) Still a bit costly in terms of herding cats
> 
> Plan D
> Drop stable point releases, publish per-commit tarballs
> (-) Requires some infra changes, takes some storage space
> 
> Plans B, C and D also require some release note / changelog generation
> from data maintained *within* the repository.
> 
> Personally I think the objections raised against plan A are valid. I
> like plan D, since it's more like releasing every commit than "not
> releasing anymore". I think it's the most honest trade-off. I could go
> with plan C, but I think it's added work for no additional value to the
> user.
> 
> What would be your preferred option ?
> 
Either A or D is what I will likely do.  Also, I'll likely build infra
on my end to generate ebuilds like B proposes.

-- 
-- Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Mellanox CI Issues

2015-06-05 Thread Moshe Levi
Hi Dan,

I just want to update you that the success reports after short,
less-than-a-minute runs are a Zuul problem; see [1].
This happened because we apply filter rules to run only on PCI code to
reduce load on our CI system.

Unfortunately we missed this issue in our monitoring because all the jobs
in Jenkins looked good, and Zuul also reports when no Jenkins job was
scheduled.

I also talked with John Garbutt and it seems the recommendation is to
filter only the doc and test folders,
but I would still like to filter some other folders like
1. nova/nova/virt/hyperv
2. nova/nova/virt/ironic
3. nova/nova/virt/vmwareapi
4. nova/nova/virt/xenapi

Are there any Nova CI guidelines for file filtering?
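
(For anyone curious what such filtering amounts to, here is a toy sketch
of the effect of per-job file rules -- the patterns are just examples,
not our actual configuration:)

    import re

    # Toy example: skip a job when every file touched by the change
    # matches one of the skip patterns.
    SKIP_PATTERNS = [
        r'^doc/.*',
        r'^nova/tests/.*',
        r'^nova/virt/(hyperv|ironic|vmwareapi|xenapi)/.*',
    ]

    def should_run(changed_files):
        skip = [re.compile(p) for p in SKIP_PATTERNS]
        return not all(any(s.match(f) for s in skip) for f in changed_files)

    assert should_run(['nova/virt/libvirt/driver.py'])
    assert not should_run(['doc/source/index.rst', 'nova/tests/unit/foo.py'])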

Again I would like to apologize for the inconvenience.

[1] https://review.openstack.org/#/c/188383/

Thanks,
Moshe Levi.

-Original Message-
From: Lenny Verkhovsky 
Sent: Monday, June 01, 2015 11:21 PM
To: Dan Smith; OpenStack Development Mailing List (not for usage questions)
Cc: Moshe Levi
Subject: RE: [nova] Mellanox CI Issues

Hi Dan,
I disabled Mellanox Nova CI from commenting and will check this issue first 
thing tomorrow morning.
Thanks and my deepest apologies.

Lenny Verkhovsky


-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: Monday, June 01, 2015 7:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Moshe Levi; Lenny Verkhovsky
Subject: [nova] Mellanox CI Issues

Hi,

Mellanox CI has been broken for a while now. Test runs are reported as 
"successful" after an impossibly short less-than-a-minute run. Could the owners 
of this please take a look and address the problem? At least disabling 
commenting while working on the issue would be helpful.

Also, on success, the bot doesn't post the log files, which is (a) inconsistent 
with other test bots and (b) not very helpful for validating that success is 
real. This is especially relevant right now, given that we know the success 
reports are erroneous at the moment.

This is the second time (in recent memory) that Mellanox CI has gone off the 
rails for a decent amount of time without being noticed by the owners. If this 
continues, I'll be in favor of removing commenting privileges for this account 
and will be hesitant to throw in my support for re-enabling it. Running CI 
against gerrit comes with serious responsibility for monitoring!

Thanks!

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-05 Thread Thierry Carrez
Daniel P. Berrange wrote:
> On Fri, Jun 05, 2015 at 02:46:07PM +0200, Thierry Carrez wrote:
>> Plan A
>> Just drop stable point releases.
>> (-) No more release notes
>> (-) Lack of reference points to compare installations
>>
>> Plan B
>> Push date-based tags across supported projects from time to time.
>> (-) Encourages to continue using same version across the board
>> (-) Almost as much work as making proper releases
>>
>> Plan C
>> Let projects randomly tag point releases whenever
>> (-) Still a bit costly in terms of herding cats
>>
>> Plan D
>> Drop stable point releases, publish per-commit tarballs
>> (-) Requires some infra changes, takes some storage space
>>
>> Plans B, C and D also require some release note / changelog generation
>> from data maintained *within* the repository.
>>
>> [...]
> 
> I don't see a whole lot of difference between plan A and D.
> Publishing per-commit tarballs is merely saving the downstream
> users the need to run a 'git archive' command, and providing
> some auto-generated changelog that's already available from
> 'git log'.

I guess the main difference is that we would still "release" something
(i.e. generate, version, publish and archive a tarball) in case of D,
which makes it easier to reuse a known point. It would also exhibit
version numbers more visibly, which makes referencing them slightly
easier (currently you have to look into the generated, transient
$PROJECT-stable-$SERIES.tar.gz tarball to see that version number).

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][release] summit session summary: Release Versioning for Server Applications

2015-06-05 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2015-05-29 17:38:04 -0400:
> This message is a summary of the notes from the “Release Versioning Servers” 
> discussion held during the release management/QA/infrastructure meetup period 
> on Friday morning at the summit, along with some commentary I thought of as I 
> was typing them up. There is no etherpad for that session, but the notes we 
> took were captured as a photo of the whiteboard, which you can see at 
> http://doughellmann.com/2015/05/29/openstack-server-version-numbering.html
> 
> tl;dr: To simplify release management, especially for projects releasing more 
> than one time per cycle, we would like projects currently using date-based 
> versioning to move to semver. Existing projects following some form of semver 
> should keep doing what they’re doing.
> 
> 
> Moving away from date-based release numbers removes the confusion about what 
> an update to a release occurring in the following year means. It also makes 
> it easier for us to publish server releases through PyPI, if we choose to do 
> that (it was discussed, but I don’t remember a firm agreement). The 
> transition introduces some complications, but we think it is possible to 
> handle them all.
> 
> As with our other semver projects, we will use pbr’s interpretation of the 
> semver rules (http://docs.openstack.org/developer/pbr/semver.html) for minor 
> updates and patch releases. I don’t believe we discussed whether to increment 
> the major version during each cycle as we have been doing. Under the semver 
> rules that would indicate incompatibility, and we may not want to signal that 
> arbitrarily. We should discuss that further leading up to the M summit, when 
> we start preparing for the *next* cycle.
> 
> Since kilo was release 11, I proposed we start with version 12.0.0 for 
> everyone’s next release and proceed from there following semver rules. This 
> will result in resetting the version numbers to values lower than they are 
> currently (12 < 2015), but the distributions can prepend an epoch value to 
> their version number to ensure upgrades work properly. It will also mean that 
> project versions will drift apart over time, since some projects such as 
> Ironic will start having more intermediate releases.

Thierry and I had a conversation about this today, and he has
convinced me that since project versions will diverge anyway, we
shouldn't start out using the same version for everyone. So instead
of picking 12 for all projects, we will look at how many releases
a project has had and add 1 to produce the next version. That will
spread out the version numbers now, and reduce the tendency for
consumers of the projects to assume that the version numbers match
up with the release cycles. We will work with the release liaisons and
PTLs to figure out the version numbers for projects as we get close to
the L1 milestone.

Doug

> 
> We will still use stable branches, and stable release versions will follow 
> the same rules so it should always be clear what versions are upgrades of 
> what previous releases.
> 
> The library projects should already be using semver. In some cases they 
> aren’t, but we’ll be fixing that separately this cycle. Server projects 
> already using semver-like rules should continue to follow their existing 
> patterns, unless they want to adopt this new process, in which case the 
> release team will be happy to help them set up whatever they need.
> 
> Please let me know if you think I missed any details, or misunderstood 
> something as agreed to. Or, of course, if you weren’t there and see something 
> wrong with the plan after looking at it now.
> 
> Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][security] Enable user password complexity verification

2015-06-05 Thread Brant Knudson
On Wed, Jun 3, 2015 at 6:49 AM, David Stanek  wrote:

>
> On Wed, Jun 3, 2015 at 6:04 AM liusheng  wrote:
>
>>  Thanks for this topic, also, I think it is similar situation when
>> talking about keystone users, not only the instances's password.
>>
>>
> In the past we've talked about having more advanced password management
> features in Keystone (complexity checks, rotation, etc). The end result is
> that we are not adding them because we would like to get away from managing
> users in Keystone that way. Instead we are pushing for users to integrate
> Keystone with more fully featured identity products.
>
>

We typically reject it for our SQL backend implementation since there are
other ways to configure Keystone that support the functionality
already. You can configure Keystone to use an LDAP backend or you can use
federation. So there's no reason for us to re-implement and support all
this functionality.

That said, if there was a python library that did password complexity
validation that nova was using and it only required a couple of lines of
code in keystone to support it I wouldn't be against it.
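
A minimal sketch of the kind of check I mean (illustrative only, not tied
to any particular library or policy):

    import re

    # Illustrative only: a small, self-contained complexity check of the
    # sort that could live in a shared library rather than be
    # re-implemented per project.
    def is_complex_enough(password, min_length=8):
        if len(password) < min_length:
            return False
        classes = [r'[a-z]', r'[A-Z]', r'[0-9]', r'[^a-zA-Z0-9]']
        # require at least three of the four character classes
        return sum(1 for c in classes if re.search(c, password)) >= 3

    assert is_complex_enough('Tr0ub4dor&3')
    assert not is_complex_enough('password')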

- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa] Empty "Build succeeded" when filtering jobs

2015-06-05 Thread Evgeny Antyshev

Hello!

Please, look at the change:
https://review.openstack.org/188383


On 03.06.2015 18:56, James E. Blair wrote:

Evgeny Antyshev  writes:


Some CIs like to narrow their scope to a certain set of files.
For that, they specify a file mask on a per-job basis. So there appear
annoying comments with only "Build succeeded".
(an example complaint:
http://lists.openstack.org/pipermail/openstack-dev/2015-June/065367.html)

Moreover, most CIs which don't bother filtering make lots of
comments on doc/unittest changes, which is also wrong.
(see https://review.openstack.org/#/c/152006, and most CIs don't run
unittests)
What if Zuul did not comment when no real jobs run?
The only meaningful task that is done is merging the patch,
but in case of a merge failure there should still be a "Merge failed" comment.

If there are no objections, I'll make the corresponding change in Zuul.

Sounds good to me.  In fact, if you specify no jobs for a
project-pipeline in Zuul, it does nothing (which is why we have the noop
jobs).  Arguably the fact that when the job set reduces to nothing due
to filtering the change is still enqueued is a bug.

I will note that this may complicate efforts to track the performance of
third-party CI systems, especially determining whether they are
reporting on all changes.  I still think you should make the change; the
reporting systems may just need to be a little more sophisticated
(perhaps they should only look at changes where OpenStack's CI system
ran certain jobs).

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Best regards,
Evgeny Antyshev.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all]Big Tent Mode within respective projects

2015-06-05 Thread Zhipeng Huang
Nice pointer Boris ! :)

On Fri, Jun 5, 2015 at 3:32 AM, Jay Pipes  wrote:

> On 06/04/2015 07:46 AM, Boris Pavlovic wrote:
>
>> Jay,
>>
>>
>> At this time, Neutron is the only project that has done any
>> splitting out of driver and advanced services repos. Other projects
>> have discussed doing this, but, at least in Nova, that discussion
>> was put on hold for the time being. Last I remember, we agreed that
>> we would clean up, stabilize and document the virt driver API in
>> Nova before any splitting of driver repos would be feasible.
>>
>>
>> Imho not only Neutron has this. ;)
>> Rally support out of tree plugins as well and I saw already some third
>> party repos:
>> https://github.com/stackforge/haos
>>
>
> Yup, good point, Boris!
>
> -jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] [docs] Spec updated for API reference info and app dev guides

2015-06-05 Thread Anne Gentle
Hi all,

I wanted to point out the updated specification for the approach we're
taking for the API docs, basically scraping the code with a WSGI layer to
build Swagger files. See
https://review.openstack.org/#/c/177934/5/specs/liberty/api-site.rst for
the gory details. A huge thanks to Tom Fifield and Russell Sims for
ideating and working on this solution.
Please review, comment, and respond. I'm listening and looking forward to
working across projects on this update to the API docs.

Thanks,
Anne

-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-05 Thread Jeremy Stanley
On 2015-06-05 14:56:30 +0200 (+0200), Thierry Carrez wrote:
[...]
> I was wondering if we could switch to post-versioning on stable
> branches, and basically generate:
> 
> "2015.1.0.post38"
[...]

I think the recommendation from the PyPI maintainers is to not use
.postN suffixes since they are intended to indicate non-code
changes.

We could look at a mode for PBR which just adds a fourth unqualified
integer component that increments per patchset from the last known
tag (though that's technically not SemVer any longer)? Or maybe we
could have a PBR mode that just increments the third integer
component of the version for every commit without needing a new tag?
Or we could just push a corresponding tag for each merged commit
(though that essentially brings us back to the "tag arbitrarily"
option)?
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-05 Thread Daniel P. Berrange
On Fri, Jun 05, 2015 at 02:46:07PM +0200, Thierry Carrez wrote:
> So.. summarizing the various options again:
> 
> Plan A
> Just drop stable point releases.
> (-) No more release notes
> (-) Lack of reference points to compare installations
> 
> Plan B
> Push date-based tags across supported projects from time to time.
> (-) Encourages to continue using same version across the board
> (-) Almost as much work as making proper releases
> 
> Plan C
> Let projects randomly tag point releases whenever
> (-) Still a bit costly in terms of herding cats
> 
> Plan D
> Drop stable point releases, publish per-commit tarballs
> (-) Requires some infra changes, takes some storage space
> 
> Plans B, C and D also require some release note / changelog generation
> from data maintained *within* the repository.
> 
> Personally I think the objections raised against plan A are valid. I
> like plan D, since it's more like releasing every commit than "not
> releasing anymore". I think it's the most honest trade-off. I could go
> with plan C, but I think it's added work for no additional value to the
> user.

I don't see a whole lot of difference between plan A and D.
Publishing per-commit tarballs is merely saving the downstream
users the need to run a 'git archive' command, and providing
some auto-generated changelog that's already available from
'git log'.

If the downsteam consumer has their own extra patches ontop of the
stable branch, then it seems D is even less useful than A.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Domain and Project naming

2015-06-05 Thread Adam Young

On 06/04/2015 11:13 PM, Jamie Lennox wrote:


- Original Message -

From: "Adam Young" 
To: "OpenStack Development Mailing List" 
Sent: Thursday, 4 June, 2015 2:25:52 PM
Subject: [openstack-dev] [Keystone] Domain and Project naming

With Hierarchical Multitenancy, we have the issue that a project is
currently restricted in its naming further than it should be.  The domain
entity enforces that all project names under the domain be
unique, but really what we should say is that all projects under a
single parent project must be unique.  However, we have, at present, an API
which allows a user to specify the domain by either name or ID and the
project, again, by either name or ID, but here we care only about the name.
This can be used either in specifying the token, or in operations on the
project API.

We should change project naming to be nestable, and since we don't have a
delimiter set, we should expect the names to be an array, where today we
might have:

  "project": {
  "domain": {
  "id": "1789d1",
  "name": "example.com"
  },
  "id": "263fd9",
  "name": "project-x"
  }

we should allow and expect:

  "project": {
  "domain": {
  "id": "1789d1",
  "name": "example.com"
  },
  "id": "263fd9",
  "name": [ "grandpa", "dad", "daughter"]
  }

This will, of course, break Horizon and lots of other things, which
means we need a reasonable way to display these paths.  The typical UI
approach is a breadcrumb trail, and I think something where we put the
segments of the path in the UI, each clickable, should be
understandable: I'll defer to the UX experts if this is reasonable or not.

The alternative is that we attempt to parse the project names. Since we
have not reserved a delimiter, we will break someone somewhere if we
force one on people.
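
To make the delimiter ambiguity concrete, a quick sketch (nothing
Keystone-specific, purely illustrative):

    # Array form: trivially unambiguous.
    path = ["grandpa", "dad", "daughter"]
    assert path[0] == "grandpa" and path[-1] == "daughter"

    # Delimiter form: a single project literally named "dev/ops" and a
    # project "ops" nested under "dev" flatten to the same string, so
    # they cannot be told apart after the fact.
    DELIM = "/"
    flat_a = DELIM.join(["dev/ops"])
    flat_b = DELIM.join(["dev", "ops"])
    assert flat_a == flat_b == "dev/ops"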


As an alternative, we should start looking into following DNS standards
for naming projects and hosts.  While a domain should not be required to
be a DNS-registered domain name, we should allow for the case where a
user wants that to be the case, and to synchronize naming across
multiple clouds.  In order to enforce this, we would have to have an
indicator on a domain name that it has been checked with DNS;  ideally,
the user would add a special SRV or TXT record or something that
Keystone could use to confirm that the user has OKed this domain name
being used by this cloud...or something perhaps with DNSSEC, checking
that a user has permission to assign a specific domain name to a set of
resources in the cloud.  If we do that, the projects under that domain
should also be valid DNS subzones, and the hosts either FQDNs or some
alternate record...this would tie in well with Designate.
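
As a rough illustration of the "special TXT record" idea (a sketch only;
the record name and format here are made up, and the use of dnspython is
just an assumption):

    import dns.resolver  # dnspython

    def domain_authorizes_cloud(domain_name, cloud_id):
        # Look for a TXT record like "cloud=<cloud_id>" under a
        # well-known label, e.g. _openstack.example.com.
        try:
            answers = dns.resolver.query('_openstack.' + domain_name, 'TXT')
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        for record in answers:
            text = b''.join(record.strings).decode('utf-8')
            if text == 'cloud=%s' % cloud_id:
                return True
        return False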

Note that I am not saying "force this"  but rather "allow this" as it
will simplify the naming when bursting from cloud to cloud:  the Domain
and project names would then be synchronized via DNS regardless of
hosting provider.

As an added benefit, we could provide an SRV or TXT record (or some new
URL type... I heard one is coming) that describes where to find the home
Keystone server for a specified domain...it would work nicely with the
K2K strategy.

If we go with DNS project naming, we can leave all project names in a
flat string.


Note that the DNS approach can work even if the user does not wish to
register their own DNS.  A hosting provider (I'll pick dreamhost, cuz I
know they are listening) could say that each of their tenants picks a
user name...say that mine is admiyo; they would then create a subdomain
of admiyo.dreamcompute.dreamhost.com.  All of my subprojects would then
get additional zones under that.  If I were then to burst from there to
Bluebox, the Keystone domain name would be the one that I was assigned
back at Dreamhost.

Back up. Are our current restrictions a problem?
I think it will trip people up.  It is not an intentional design, but a 
limitation due to historical accident.




Even with hierarchical projects is it a problem to say that a project name 
still must be unique per domain? I get that in theory you might want to be able 
to identify a nested project by name under other projects but that's not 
something we have to allow immediately.
I think so.  I think a very common pattern will be having one project for 
a major application (Trello, Wordpress, Kubernetes) with "dev, qa, 
staging, live"  under it, and now we are telling people they can't have it.




I haven't followed the reseller case closely but in any situation where you hand
off control like that we are re-establishing a domain, and so in a multitenancy
situation each domain can still use their own project names.
Yeah, that is not a problem here:  if we nest under a domain, uniqueness
is not an issue.


I feel like discussions around nested naming schemes and tying domains to DNS
are really premature

Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-05 Thread Thierry Carrez
Jeremy Stanley wrote:
> On 2015-06-01 15:57:17 + (+), Jeremy Stanley wrote:
> [...]
>> The biggest hurdle is that we'd need a separate upload job name
>> for those since the current version of Zuul lacks a way to run a
>> particular job for different branches in different pipelines (we'd
>> want to do versioned uploads for all pre-release and release
>> pipeline refs, but also for post pipeline refs only when the
>> branch name is like stable/.*).
> 
> Actually, scratch that. It's a bit more complicated since the post
> pipeline isn't actually branch-relevant. We'd need to tweak the
> tarball and wheel creation scripts to check the containing branch,
> like we do for some proposal jobs. Still, I think it wouldn't be too
> hard.

Exploring plan D, I was looking at the versions we currently generate on
stable branches and I think they would not convey the right message:

"2015.1.1.dev38"

- but there won't be a 2015.1.1 !
- but this is not "under development" !

I was wondering if we could switch to post-versioning on stable
branches, and basically generate:

"2015.1.0.post38"

... which would convey the right message.
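
(A quick sanity check of how these strings sort under PEP 440 -- assuming
pkg_resources from a reasonably recent setuptools is available:)

    from pkg_resources import parse_version

    # dev versions sort *before* the release they lead up to,
    # post versions sort *after* the tag they follow.
    assert parse_version('2015.1.1.dev38') < parse_version('2015.1.1')
    assert parse_version('2015.1.0') < parse_version('2015.1.0.post38')
    assert parse_version('2015.1.0.post38') < parse_version('2015.1.1')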

I /think/ all it would take would be, as the first post-release commit
to the stable branch, to remove the preversion from setup.cfg (rather
than bump it to the next .1). I think pbr would switch to postversioning
in that case and generate postX versions from the last tag in the branch.

Not sure we would do that for stable/kilo, though, since we already
pushed 2015.1.1.devX versions in the wild.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-05 Thread Thierry Carrez
So.. summarizing the various options again:

Plan A
Just drop stable point releases.
(-) No more release notes
(-) Lack of reference points to compare installations

Plan B
Push date-based tags across supported projects from time to time.
(-) Encourages to continue using same version across the board
(-) Almost as much work as making proper releases

Plan C
Let projects randomly tag point releases whenever
(-) Still a bit costly in terms of herding cats

Plan D
Drop stable point releases, publish per-commit tarballs
(-) Requires some infra changes, takes some storage space

Plans B, C and D also require some release note / changelog generation
from data maintained *within* the repository.

Personally I think the objections raised against plan A are valid. I
like plan D, since it's more like releasing every commit than "not
releasing anymore". I think it's the most honest trade-off. I could go
with plan C, but I think it's added work for no additional value to the
user.

What would be your preferred option ?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-05 Thread Sean Dague
On 06/05/2015 01:28 AM, Adrian Otto wrote:
> 
>> On Jun 4, 2015, at 11:03 AM, Devananda van der Veen
>> mailto:devananda@gmail.com>> wrote:
>>
>>
>> On Jun 4, 2015 12:00 AM, "Xu, Hejie" > > wrote:
>> >
>> > Hi, guys,
>> >  
>> > I'm working on adding Microversions into the API-WG's guideline, which
>> > will make sure we have consistent Microversion behavior in the API for
>> > users.
>> > Nova and Ironic already have Microversion implementations, and as far as
>> > I know Magnum https://review.openstack.org/#/c/184975/ is going to
>> > implement Microversions also.
>> >
>> > I hope all the projects which support (or plan to support) Microversions
>> > can join the review of the guideline.
>> >
>> > The Microversion specification (almost a copy from nova-specs):
>> > https://review.openstack.org/#/c/187112
>> > And another guideline for when we should bump the Microversion:
>> > https://review.openstack.org/#/c/187896/
>> >
>> > As far as I know, there is already a small difference between Nova's and
>> > Ironic's implementations. Ironic returns the min/max versions in HTTP
>> > headers when the requested version isn't supported by the server. There
>> > isn't such a thing in Nova, but that is something we need for version
>> > negotiation in Nova also.
>> > Sean has pointed out we should use the response body instead of HTTP
>> > headers, since the body can include an error message. I really hope the
>> > Ironic team can take a look and see if you have a compelling reason for
>> > using HTTP headers.
>> >
>> > And if we decide to return the body instead of HTTP headers, we probably
>> > need to think about backward compatibility also, because Microversioning
>> > itself isn't versioned.
>> > So I think we should keep those headers for a while; does that make sense?
>> >
>> > I hope we end up with a good guideline for Microversions, because we can
>> > only change Microversioning itself in a backward-compatible way.
>>
>> Ironic returns the min/max/current API version in the http headers for
>> every request.
>>
>> Why would it return this information in a header on success and in the
>> body on failure? (How would this inconsistency benefit users?)
>>
>> To be clear, I'm not opposed to *also* having a useful error message
>> in the body, but while writing the client side of api versioning,
>> parsing the range consistently from the response header is, IMO,
>> better than requiring a conditional.
>>
> +1. I fully agree with Devananda on this point. Use the headers
> consistently, and add helpful errors into the body only as an addition
> to that behavior, not a substitute.

I think the difference between Nova and Ironic here is that Nova doesn't
send all the headers all the time in the final implementation (that part
of the spec evolved out I think). Part of that was pressure about Header
bloat that people were concerned about, as that impacts caching layers.

I would agree that if Ironic is sending all the headers all the time,
that's fine. However, for consistency it would be great to also put a
real body that explains the issue as well, as headers are not the first
place people look when things go wrong, and are often not logged by
client side tools on errors (where the body would be).
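
For illustration, what a "headers plus a real body" failure might look
like (a sketch only; the exact header names and body format vary between
projects and are not meant to be authoritative):

    import json

    def version_not_acceptable(requested, min_version, max_version):
        # Report the supported range in headers *and* explain it in the body.
        headers = {
            'X-OpenStack-API-Minimum-Version': min_version,
            'X-OpenStack-API-Maximum-Version': max_version,
        }
        body = json.dumps({
            'errors': [{
                'status': 406,
                'title': 'Requested API version is not supported',
                'detail': 'Version %s is not supported; the server accepts '
                          '%s through %s.' % (requested, min_version,
                                              max_version),
            }]
        })
        return 406, headers, body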

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] The Nova API in Kilo and Beyond

2015-06-05 Thread Sean Dague
One of the things we realized at the summit was that we'd been working
through a better future for the Nova API for the past 5 cycles, gotten
somewhere quite useful, but had really done a poor job on communicating
what was going on and why, and where things are headed next.

I've written a bunch of English to explain it (which should be on the
planet shortly as well) -
https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/ (with
lots of help from Ed Leafe, John Garbutt, and Matt Gillard on content and
copy editing).

Yes, this is one of those terrible mailing list posts that points people
to read a thing not on the list (I apologize). But at 2700 words, I
think you'll find it more comfortable to read not in email.

Discussion is welcome here for any comments folks have. Some details
were trimmed for the sake of it not being a 6000 word essay, and to make
it accessible to people that don't have a ton of Nova internals
knowledge. We'll do our best to field questions, all of which will be
integrated into the eventual dev ref version of this.

Thanks for your time,

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-06-05 Thread Markus Zoeller
Joe Gordon  wrote on 05/16/2015 03:33:35 AM:
> 
> After further investigation, blockdiag is useless for moderately 
> complex diagrams. 
> 
> Here is my attempt at graphing nova [0], but due to a blockdiag bug 
> from 2013, [1] it is impossible to clearly read. For example, in the 
> diagram there is not supposed to be any arrow between the conductor 
> and cinder/glance/neutron. I looked into dia, and while it has plenty 
> of diagram shapes it doesn't have a good template for software 
> architecture, but maybe there is a way to make dia work. And that just
>  leaves SVG graphics,  after spending an hour or two  playing around 
> with Inkscape and it looks promising (although the learning curve is 
> pretty steep). Here is my first attempt in Inkscape [2].
> 
> [0] http://interactive.blockdiag.com/?
> 
compression=deflate&src=eJx9UMtOAzEMvOcrrL0vPwCtVHYryoG2EvSEOHiTtI0axavEFQK0_47dB1oOkEuSmbE9ni6SPbiAO_gyAJviM7yWPfYeJlChZcrV2-2VqafQxOAT62u2fhwTC8rhk9KIkWOMfuBOC0NyPtdLf-
> RMqX6ImKwXWbN6Wm9e5v9ppNcu07EXi_puVsv2LL-U6jAd8wsSTByJV-QgtibQU-
> aMgcft4G-
> 
RcBE7HzWH9h7QWl9KpaMKf0SNxxGzdyfkElgMSVcCS5GyFnYR7aESxCFjh8WPwt1Gerd7zHxzJc9J_2wiW8r93Czm7cnOYAZjhm9d4H0M
> [1] https://bitbucket.org/blockdiag/blockdiag/issue/45/arrows-collisions
> [2] https://i.imgur.com/TXwsRoB.png
> 
>  
> Thanks,
> John

Maybe the "graphviz" extension for Sphinx could be useful [1].
It's better at displaying edges/dependencies [2]. 

[1] http://sphinx-doc.org/ext/graphviz.html
[2] http://graphviz.org/content/world

Regards,
Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Regarding Flow classifiers existing proposals

2015-06-05 Thread Miguel Angel Ajo
Gal, if you have some time to coordinate this with the service 
chaining/firewall folks and start a spec, it’d be amazing.


Best regards,
Miguel Angel Ajo



On Friday 5 June 2015 at 12:42, Vikram Choudhary wrote:

> Hi Gal,
>  
> It's really nice that you are also interested. Myself and Miguel was also 
> talking about this over the summit ;)
> Let's take care of this together ;)
>  
> Thanks
> Vikram
>  
>  
> On Fri, Jun 5, 2015 at 3:45 PM, Gal Sagie  (mailto:gal.sa...@gmail.com)> wrote:
> > Another use case is for security/firewall classifiers.
> >  
> > I agree with this and i think me and Miguel talked about it in the summit, 
> > but in order for this to go
> > forward someone need to start creating a spec and managing this effort.
> >  
> > Since you proposed it first Vikram, will you do it?
> > If not i will gladly take this on myself.
> >  
> > Gal.
> >  
> >  
> > On Fri, Jun 5, 2015 at 12:11 PM, Vikram Choudhary 
> > mailto:vikram.choudh...@huawei.com)> wrote:
> > > Thanks Miguel!  
> > >   
> > > From: Miguel Angel Ajo [mailto:mangel...@redhat.com]  
> > > Sent: 05 June 2015 14:12
> > > To: Vikram Choudhary
> > > Cc: azama-y...@mxe.nes.nec.co.jp (mailto:azama-y...@mxe.nes.nec.co.jp); 
> > > Henry Fourie; Cathy Zhang; arma...@gmail.com (mailto:arma...@gmail.com); 
> > > Dongfeng (C); Kyle Mestery; openstack-dev@lists.openstack.org 
> > > (mailto:openstack-dev@lists.openstack.org); Dhruv Dhody; Kalyankumar 
> > > Asangi
> > > Subject: [neutron] Regarding Flow classifiers existing proposals  
> > >   
> > >   
> > >  
> > >   
> > >  
> > > Added openstack-dev, where I believe this conversation must live.
> > >  
> > >   
> > >  
> > > I totally agree on this, thank you for bringing up this conversation. 
> > > This is not something we want to do for QoS this cycle, but probably next 
> > > cycle.
> > >  
> > >   
> > >  
> > > Anyway, an unified data model and API to create/update classifiers will 
> > > not only be beneficial from the code duplication point of view, but will 
> > > also provide a better user experience.
> > >  
> > >   
> > >  
> > > I’m all for it.
> > >  
> > >   
> > >  
> > > Best regards,
> > >  
> > > Miguel Ángel Ajo
> > >  
> > >  
> > >   
> > >  
> > > On Friday 5 June 2015 at 09:57, Vikram Choudhary wrote:
> > > >  
> > > > Dear All,
> > > >  
> > > >  
> > > >   
> > > >  
> > > >  
> > > > There are multiple proposal floating around flow classifier rules for 
> > > > Liberty [1], [2] and [3].
> > > >  
> > > >  
> > > > I feel we all should work together and try to address all our use case 
> > > > having a unified framework rather than working separately achieving the 
> > > > same  goal.
> > > >  
> > > >  
> > > >   
> > > >  
> > > >  
> > > > Moreover, I can find the proposal for flow classifier as defined by the 
> > > > existing SFC [2] proposal is too generic and could address all the use 
> > > > cases by minor extension’s.
> > > >  
> > > >  
> > > >   
> > > >  
> > > >  
> > > > In this regard, I would like all to come forward, exchange their 
> > > > thoughts, work together and make it happen good the first go rather 
> > > > doing the same effort separately and end up in duplicating code & 
> > > > effort L.
> > > >  
> > > >  
> > > > I always feel less code will make our life happy in the long run ;)
> > > >  
> > > >  
> > > >   
> > > >  
> > > >  
> > > > Please let me know about your views.
> > > >  
> > > >  
> > > >   
> > > >  
> > > >  
> > > > [1] Add Neutron API extensions for packet forwarding
> > > >  
> > > >  
> > > >   https://review.openstack.org/#/c/186663/
> > > >  
> > > >  
> > > >   
> > > >  
> > > > [2] Neutron API for Service Chaining [Flow Filter resource]
> > > >  
> > > >   
> > > > https://review.openstack.org/#/c/177946/6/specs/liberty/neutron-api-for-service-chaining.rst
> > > >  
> > > >  
> > > >   
> > > >  
> > > >  
> > > > [3] QoS API Extension [Defines classifier rule in QoSRule. Classifier 
> > > > rule can really grow big in the long run]:
> > > >  
> > > >  
> > > >   
> > > > https://review.openstack.org/#/c/88599/10/specs/liberty/qos-api-extension.rst
> > > >  
> > > >  
> > > >   
> > > >  
> > > >  
> > > > Thanks
> > > >  
> > > >  
> > > > Vikram
> > > >  
> > > >  
> > > >  
> > > >  
> > > >  
> > >  
> > >   
> > >  
> > >  
> > >  
> > >  
> > >  
> > >  
> > > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: 
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> > > (http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >  
> >  
> >  
> >  
> > --  
> > Best Regards ,
> >  
> > The G.  
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@li

Re: [openstack-dev] [neutron] Regarding Flow classifiers existing proposals

2015-06-05 Thread Vikram Choudhary
Hi Gal,

It's really nice that you are also interested. Miguel and I were also
talking about this at the summit ;)
Let's take care of this together ;)

Thanks
Vikram

On Fri, Jun 5, 2015 at 3:45 PM, Gal Sagie  wrote:

> Another use case is for security/firewall classifiers.
>
> I agree with this and i think me and Miguel talked about it in the summit,
> but in order for this to go
> forward someone need to start creating a spec and managing this effort.
>
> Since you proposed it first Vikram, will you do it?
> If not i will gladly take this on myself.
>
> Gal.
>
>
> On Fri, Jun 5, 2015 at 12:11 PM, Vikram Choudhary <
> vikram.choudh...@huawei.com> wrote:
>
>>  Thanks Miguel!
>>
>>
>>
>> *From:* Miguel Angel Ajo [mailto:mangel...@redhat.com]
>> *Sent:* 05 June 2015 14:12
>> *To:* Vikram Choudhary
>> *Cc:* azama-y...@mxe.nes.nec.co.jp; Henry Fourie; Cathy Zhang;
>> arma...@gmail.com; Dongfeng (C); Kyle Mestery;
>> openstack-dev@lists.openstack.org; Dhruv Dhody; Kalyankumar Asangi
>> *Subject:* [neutron] Regarding Flow classifiers existing proposals
>>
>>
>>
>>
>>
>>
>>
>> Added openstack-dev, where I believe this conversation must live.
>>
>>
>>
>> I totally agree on this, thank you for bringing up this conversation.
>> This is not something we want to do for QoS this cycle, but probably next
>> cycle.
>>
>>
>>
>> Anyway, an unified data model and API to create/update classifiers will
>> not only be beneficial from the code duplication point of view, but will
>> also provide a better user experience.
>>
>>
>>
>> I’m all for it.
>>
>>
>>
>> Best regards,
>>
>> Miguel Ángel Ajo
>>
>>
>>
>> On Friday 5 June 2015 at 09:57, Vikram Choudhary wrote:
>>
>>   Dear All,
>>
>>
>>
>> There are multiple proposal floating around flow classifier rules for
>> Liberty [1], [2] and [3].
>>
>> I feel we all should work together and try to address all our use case
>> having a unified framework rather than working separately achieving the
>> same  goal.
>>
>>
>>
>> Moreover, I can find the proposal for flow classifier as defined by the
>> existing SFC [2] proposal is too generic and could address all the use
>> cases by minor extension’s.
>>
>>
>>
>> In this regard, I would like all to come forward, exchange their
>> thoughts, work together and make it happen good the first go rather doing
>> the same effort separately and end up in duplicating code & effort L.
>>
>> I always feel less code will make our life happy in the long run ;)
>>
>>
>>
>> Please let me know about your views.
>>
>>
>>
>> [1] Add Neutron API extensions for packet forwarding
>>
>>   https://review.openstack.org/#/c/186663/
>>
>>
>>
>> [2] Neutron API for Service Chaining [Flow Filter resource]
>>
>>
>> https://review.openstack.org/#/c/177946/6/specs/liberty/neutron-api-for-service-chaining.rst
>>
>>
>>
>> [3] QoS API Extension [Defines classifier rule in QoSRule. Classifier
>> rule can really grow big in the long run]:
>>
>>
>> https://review.openstack.org/#/c/88599/10/specs/liberty/qos-api-extension.rst
>>
>>
>>
>> Thanks
>>
>> Vikram
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Regarding Flow classifiers existing proposals

2015-06-05 Thread Gal Sagie
Another use case is for security/firewall classifiers.

I agree with this, and I think Miguel and I talked about it at the summit,
but in order for this to go
forward someone needs to start creating a spec and managing this effort.

Since you proposed it first, Vikram, will you do it?
If not, I will gladly take this on myself.

Gal.


On Fri, Jun 5, 2015 at 12:11 PM, Vikram Choudhary <
vikram.choudh...@huawei.com> wrote:

>  Thanks Miguel!
>
>
>
> *From:* Miguel Angel Ajo [mailto:mangel...@redhat.com]
> *Sent:* 05 June 2015 14:12
> *To:* Vikram Choudhary
> *Cc:* azama-y...@mxe.nes.nec.co.jp; Henry Fourie; Cathy Zhang;
> arma...@gmail.com; Dongfeng (C); Kyle Mestery;
> openstack-dev@lists.openstack.org; Dhruv Dhody; Kalyankumar Asangi
> *Subject:* [neutron] Regarding Flow classifiers existing proposals
>
>
>
>
>
>
>
> Added openstack-dev, where I believe this conversation must live.
>
>
>
> I totally agree on this, thank you for bringing up this conversation. This
> is not something we want to do for QoS this cycle, but probably next cycle.
>
>
>
> Anyway, an unified data model and API to create/update classifiers will
> not only be beneficial from the code duplication point of view, but will
> also provide a better user experience.
>
>
>
> I’m all for it.
>
>
>
> Best regards,
>
> Miguel Ángel Ajo
>
>
>
> On Friday 5 June 2015 at 09:57, Vikram Choudhary wrote:
>
>   Dear All,
>
>
>
> There are multiple proposal floating around flow classifier rules for
> Liberty [1], [2] and [3].
>
> I feel we all should work together and try to address all our use case
> having a unified framework rather than working separately achieving the
> same  goal.
>
>
>
> Moreover, I can find the proposal for flow classifier as defined by the
> existing SFC [2] proposal is too generic and could address all the use
> cases by minor extension’s.
>
>
>
> In this regard, I would like all to come forward, exchange their thoughts,
> work together and make it happen good the first go rather doing the same
> effort separately and end up in duplicating code & effort L.
>
> I always feel less code will make our life happy in the long run ;)
>
>
>
> Please let me know about your views.
>
>
>
> [1] Add Neutron API extensions for packet forwarding
>
>   https://review.openstack.org/#/c/186663/
>
>
>
> [2] Neutron API for Service Chaining [Flow Filter resource]
>
>
> https://review.openstack.org/#/c/177946/6/specs/liberty/neutron-api-for-service-chaining.rst
>
>
>
> [3] QoS API Extension [Defines classifier rule in QoSRule. Classifier
> rule can really grow big in the long run]:
>
>
> https://review.openstack.org/#/c/88599/10/specs/liberty/qos-api-extension.rst
>
>
>
> Thanks
>
> Vikram
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Voting on the Nova project meeting times

2015-06-05 Thread John Garbutt
On 4 June 2015 at 12:54, John Garbutt  wrote:
> Hi,
>
> We have a regular Nova project meeting with alternating times, as
> described here:
> https://wiki.openstack.org/wiki/Meetings/Nova
>
> I will lean towards no change of the times, given we are used to them.
> But I want to double check there if there is a group of contributors
> we are accidentally excluding due to the times we have picked.
>
> Please vote here:
> http://doodle.com/eyzvnawzv86ubtaw

I forgot to mention... the poll closes at the start of the next Nova
meeting on June 11:
https://wiki.openstack.org/wiki/Meetings/Nova

That's what I promised in the Nova meeting last week.

> If doodle doesn't work in your country, or whatever, do email me
> directly and I can add in your vote. I could have used something
> better, but doodle was easy.
>
> Thanks,
> John
>
> PS
It's possible the majority choice is the wrong choice for complex
reasons. This is just the best way I could think of to get some quick
feedback on the times we are currently using, and possible
alternatives. That's just a heads up.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Follow up actions from the Summit: please help

2015-06-05 Thread John Garbutt
Hi,

So in the interests of filling up your inbox yet further...

We have lots of etherpads from the summit:
https://wiki.openstack.org/wiki/Design_Summit/Liberty/Etherpads#Nova

I have extracted all the action items here:
https://etherpad.openstack.org/p/YVR-nova-liberty-summit-action-items

Please do add any actions that might be missing.

Matt Riedemann wins the prize[1] for the first[2][3] completed action
item, by releasing python-novaclient with the volume actions
deprecated.

It has been noted that I greedily took most of the actions for
myself. The name is purely the person who gets to make sure the action
happens. If you want to help (please do help!), contact the person
named, who might be able to hand over that task.

Thanks,
John

[1] its a virtual trophy, here you go: >--|
[2] may not have been the first, but whatever
[3] no, there is no prize for the last person

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Change abandonment policy

2015-06-05 Thread Sanjay Upadhyay
+1 for #3 with N = 1

regards
/sanjay

On Fri, Jun 5, 2015 at 1:27 AM, Mike Dorman  wrote:

>   I vote #2, with a smaller N.
>
>  We can always adjust this policy in the future if find we have to
> manually abandon too many old reviews.
>
>
>   From: Colleen Murphy
> Reply-To: "puppet-openst...@puppetlabs.com"
> Date: Tuesday, June 2, 2015 at 12:39 PM
> To: "OpenStack Development Mailing List (not for usage questions)", "
> puppet-openst...@puppetlabs.com"
> Subject: [puppet] Change abandonment policy
>
>   In today's meeting we discussed implementing a policy for whether and
> when core reviewers should abandon old patches whose authors were
> inactive. (This doesn't apply to authors that want to abandon their own
> changes, only for core reviewers to abandon other people's changes.) There
> are a few things we could do here, with potential policy drafts for the
> wiki:
>
>  1) Never abandon
>
>  ```
> Our policy is to never abandon changes except for our own.
> ```
>
>  The sentiment here is that an old change in the queue isn't really
> hurting anything by just sitting there, and it is more visible if someone
> else wants to pick up the change.
>
>  2) Manually abandon after N months/weeks changes that have a -2 or were
> fixed in a different patch
>
>  ```
>  If a change is submitted and given a -1, and subsequently the author
> becomes unresponsive for a few weeks, reviewers should leave reminder
> comments on the review or attempt to contact the original author via IRC or
> email. If the change is easy to fix, anyone should feel welcome to check
> out the change and resubmit it using the same change ID to preserve
> original authorship. Core reviewers will not abandon such a change.
>
>  If a change is submitted and given a -2, or it otherwise becomes clear
> that the change can not make it in (for example, if an alternate change was
> chosen to solve the problem), and the author has been unresponsive for at
> least 3 months, a core reviewer should abandon the change.
>  ```
>
>  Core reviewers can click the abandon button only on old patches that are
> definitely never going to make it in. This approach has the advantage that
> it is easier for contributors to find changes and fix them up, even if the
> change is very old.
>
>  3) Manually abandon after N months/weeks changes that have a -1 that was
> never responded to
>
>  ```
> If a change is submitted and given a -1, and subsequently the author
> becomes unresponsive for a few weeks, reviewers should leave reminder
> comments on the review or attempt to contact the original author via IRC or
> email. If the change is easy to fix, anyone should feel welcome to check
> out the change and resubmit it using the same change ID to preserve
> original authorship. If the author is unresponsive for at least 3 months
> and no one else takes over the patch, core reviewers can abandon the patch,
> leaving a detailed note about how the change can be restored.
>
>  If a change is submitted and given a -2, or it otherwise becomes clear
> that the change can not make it in (for example, if an alternate change was
> chosen to solve the problem), and the author has been unresponsive for at
> least 3 months, a core reviewer should abandon the change.
> ```
>
>  Core reviewers can click the abandon button on changes that no one has
> shown an interest in in N months/weeks, leaving a message about how to
> restore the change if the author wants to come back to it. Puppet Labs does
> this for its module pull requests, setting N at 1 month.
>
>  4) Auto-abandon after N months/weeks if patch has a -1 or -2
>
>  ```
> If a change is given a -2 and the author has been unresponsive for at
> least 3 months, a script will automatically abandon the change, leaving a
> message about how the author can restore the change and attempt to resolve
> the -2 with the reviewer who left it.
> ```
>
>  We would use a tool like this one[1] to automatically abandon changes
> meeting a certain criteria. We would have to decide whether we want to only
> auto-abandon changes with -2's or go as far as to auto-abandon those with
> -1's. The policy proposal above assumes -2. The tool would leave a canned
> message about how to restore the change.
>
>
>  Option 1 has the problem of leaving clutter around, which the discussion
> today seeks to solve.
>
>  Option 3 leaves the possibility that a change that is mostly good
> becomes abandoned, making it harder for someone to find and restore it.
>
>   I don't think option 4 is necessary because there are not an
> overwhelming number of old changes (I count 9 that are currently over six
> months old). In working through old changes a few months ago I found that
> many of them are easy to fix up to remove a -1, and auto-abandoning removes
> the ability for a human to make that call. Moreover, if a patch has a
> procedural -2 that ought to be lifted after some point, auto-abandonment
> has the potential to accidentally throw out a change that was intended to
> land once the -2 is lifted.

Re: [openstack-dev] [neutron] Regarding Flow classifiers existing proposals

2015-06-05 Thread Vikram Choudhary
Thanks Miguel!

From: Miguel Angel Ajo [mailto:mangel...@redhat.com]
Sent: 05 June 2015 14:12
To: Vikram Choudhary
Cc: azama-y...@mxe.nes.nec.co.jp; Henry Fourie; Cathy Zhang; arma...@gmail.com; 
Dongfeng (C); Kyle Mestery; openstack-dev@lists.openstack.org; Dhruv Dhody; 
Kalyankumar Asangi
Subject: [neutron] Regarding Flow classifiers existing proposals



Added openstack-dev, where I believe this conversation must live.

I totally agree on this, thank you for bringing up this conversation. This is 
not something we want to do for QoS this cycle, but probably next cycle.

Anyway, a unified data model and API to create/update classifiers will not 
only be beneficial from the code duplication point of view, but will also 
provide a better user experience.

I’m all for it.

Best regards,
Miguel Ángel Ajo


On Friday 5 June 2015 at 09:57, Vikram Choudhary wrote:

Dear All,



There are multiple proposals floating around about flow classifier rules for Liberty 
[1], [2] and [3].

I feel we should all work together and try to address all our use cases with a 
unified framework, rather than working separately to achieve the same goal.



Moreover, I find the flow classifier defined in the existing SFC proposal [2] 
generic enough to address all the use cases with minor extensions.



In this regard, I would like everyone to come forward, exchange their thoughts, 
work together and get it right in the first go, rather than making the same 
effort separately and ending up duplicating code & effort ☹.

I always feel less code will make our life happy in the long run ;)



Please let me know about your views.



[1] Add Neutron API extensions for packet forwarding

  https://review.openstack.org/#/c/186663/



[2] Neutron API for Service Chaining [Flow Filter resource]

  
https://review.openstack.org/#/c/177946/6/specs/liberty/neutron-api-for-service-chaining.rst



[3] QoS API Extension [Defines classifier rule in QoSRule. Classifier rule can 
really grow big in the long run]:

  
https://review.openstack.org/#/c/88599/10/specs/liberty/qos-api-extension.rst



Thanks

Vikram

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Regarding Flow Classifier proposals for Liberty!

2015-06-05 Thread Vikram Choudhary
Dear All,



There are multiple proposals floating around about flow classifier rules for Liberty 
[1], [2] and [3].

I feel we should all work together and try to address all our use cases with a 
unified framework, rather than working separately to achieve the same goal.



Moreover, I find the flow classifier defined in the existing SFC proposal [2] 
generic enough to address all the use cases with minor extensions.



In this regard, I would like everyone to come forward, exchange their thoughts, 
work together and get it right in the first go, rather than making the same 
effort separately and ending up duplicating code & effort :(.

I always feel less code will make our life happy in the long run ;)



Please let me know about your views.



[1] Add Neutron API extensions for packet forwarding

  https://review.openstack.org/#/c/186663/



[2] Neutron API for Service Chaining [Flow Filter resource]

  
https://review.openstack.org/#/c/177946/6/specs/liberty/neutron-api-for-service-chaining.rst



[3] QoS API Extension [Defines classifier rule in QoSRule. Classifier rule can 
really grow big in the long run]:

  
https://review.openstack.org/#/c/88599/10/specs/liberty/qos-api-extension.rst



Thanks

Vikram

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Regarding Flow classifiers existing proposals

2015-06-05 Thread Miguel Angel Ajo


Added openstack-dev, where I believe this conversation must live.

I totally agree on this, thank you for bringing up this conversation. This is 
not something we want to do for QoS this cycle, but probably next cycle.

Anyway, a unified data model and API to create/update classifiers will not 
only be beneficial from the code duplication point of view, but will also 
provide a better user experience.

I’m all for it.

Best regards,
Miguel Ángel Ajo



On Friday 5 June 2015 at 09:57, Vikram Choudhary wrote:

> Dear All,
>   
> There are multiple proposals floating around about flow classifier rules for Liberty 
> [1], [2] and [3].
> I feel we should all work together and try to address all our use cases with 
> a unified framework, rather than working separately to achieve the same goal.
>   
> Moreover, I find the flow classifier defined in the existing SFC proposal [2] 
> generic enough to address all the use cases with minor extensions.
>   
> In this regard, I would like everyone to come forward, exchange their thoughts, 
> work together and get it right in the first go, rather than making the same 
> effort separately and ending up duplicating code & effort ☹.
> I always feel less code will make our life happy in the long run ;)
>   
> Please let me know about your views.
>   
> [1] Add Neutron API extensions for packet forwarding
>   https://review.openstack.org/#/c/186663/
>   
> [2] Neutron API for Service Chaining [Flow Filter resource]
>   
> https://review.openstack.org/#/c/177946/6/specs/liberty/neutron-api-for-service-chaining.rst
>   
> [3] QoS API Extension [Defines classifier rule in QoSRule. Classifier rule 
> can really grow big in the long run]:
>   
> https://review.openstack.org/#/c/88599/10/specs/liberty/qos-api-extension.rst
>   
> Thanks
> Vikram
>  
>  
>  
>  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][trove][neutron][octavia] Protected openstack resources

2015-06-05 Thread John Garbutt
Hi,

I still think we need to look a lot more carefully at why using an
isolated "service" tenant would not work.

Sure, that's a bit rich coming from someone trying to limit the scope
of Nova, but really I am just trying to work out what problem you are
trying to solve, and specifically what problems you have with using a
tenant separate from the user's.

On 4 June 2015 at 20:44, Eichberger, German  wrote:
> Amrith,
>
> Thanks for spearheading that work. In the Octavia project we are
> interested in the shadow tenant to solve some of the scalability issues we
> have encountered with one service tenant:
>
> * There is probably a limit on how many VMs a tenant can have

-1

There is a quota that will need updating for that tenant, but that's fine.
Also, the list instances API call will be paged, and you have to deal with
that. But I don't think there is a hard limit on that. If we find one, let's
try to fix it.
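
(As a sketch of that one-off admin step, assuming the dedicated service-tenant
model: the credentials, endpoint and tenant id below are placeholders, and the
'2' string simply selects novaclient's v2 API.)

```
from novaclient import client as nova_client

SERVICE_TENANT_ID = "<service-tenant-uuid>"  # placeholder

# Placeholder credentials and Keystone endpoint.
nova = nova_client.Client('2', 'admin', 'secret', 'service',
                          'http://keystone.example.com:5000/v2.0')

# -1 means unlimited for most Nova quota fields.
nova.quotas.update(SERVICE_TENANT_ID, instances=-1, cores=-1, ram=-1)
```
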

> * We have been running out of ipsec rules in our tenant

Do you mean run out of the default quota, or hit a hard technical limit?

> * There is a limit on how many ports a tenant can have (somebody mentioned
> 200 to me)

I would hope that's also an adjustable quota?
Or does this relate to a specific technology choice you have made?

> A lot of that we still have to validate but I think for various reasons
> sharding over multiple tenants and networks is interesting to us.

That's a nice twist, if we do hit a hard limit somewhere.

Thanks,
John

> On 6/4/15, 6:45 AM, "Doug Hellmann"  wrote:
>
>>Excerpts from Amrith Kumar's message of 2015-06-04 12:46:37 +:
>>> John,
>>>
>>> Thanks for your note. I've updated the review at
>>>https://review.openstack.org/#/c/186357/ with answers to some of your
>>>questions (and I added you to that review).
>>>
>>> Trove's use-case like some of the other projects listed is different
>>>from Glance in that Trove has a guest agent. I've tried to explain that
>>>in more detail in patch set 5. I'd appreciate your comments.
>>
>>We solved this in Akanda by placing the service VMs in a special
>>tenant, isolating them with security group rules, and then giving
>>the agent running in the VM a REST API connected to a private
>>management network owned by the same tenant that owns the VM. All
>>communication with the agent starts from a service on the outside,
>>through that management network. The VMs act as routers, so they
>>are also attached to the cloud-user's networks, but the agent doesn't
>>respond on those networks.
>>
>>Doug
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-05 Thread Xu, Hejie
Hi, Jay, I would say follow what happened on this guideline 
https://review.openstack.org/#/c/187112 :)

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: Thursday, June 4, 2015 5:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion 
guideline in API-WG

Hi Alex,

Based on my understanding, the Magnum code base was derived from Ironic;
that's why Magnum uses HTTP headers: when Magnum was created, Ironic was also
using HTTP headers.
Perhaps Magnum can follow the way Ironic moved to using microversions?
Thanks.


2015-06-04 14:58 GMT+08:00 Xu, Hejie 
mailto:hejie...@intel.com>>:
Hi, guys,

I’m working on adding microversions to the API-WG’s guidelines to make sure
we have consistent microversion behavior in the API for users.
Nova and Ironic already have microversion implementations, and as far as I
know Magnum https://review.openstack.org/#/c/184975/ is going to implement
microversions as well.

I hope all the projects which support (or plan to support) microversions can
join the review of the guideline.

The microversion specification (this is almost a copy from nova-specs):
https://review.openstack.org/#/c/187112
And another guideline for when we should bump the microversion:
https://review.openstack.org/#/c/187896/

As far as I know, there is already a small difference between Nova's and
Ironic's implementations. Ironic returns the min/max versions in HTTP headers
when the requested version isn't supported by the server. There is no such
thing in Nova, but that is something we also need for version negotiation in
Nova.
Sean has pointed out that we should use the response body instead of HTTP
headers, since the body can include an error message. I really hope the
Ironic team can take a look and say whether you have a compelling reason for
using HTTP headers.

And if we decide to return the body instead of HTTP headers, we probably need
to think about backwards compatibility as well, because the microversion
mechanism itself isn't versioned. So I think we should keep those headers for
a while; does that make sense?
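
To illustrate what is being compared, here is a rough sketch of the two
negotiation styles for an unsupported version. The Ironic-style header names
and the body fields below are assumptions for illustration, not a settled
format.

```
# Style 1: signal the supported range purely through response headers
# (roughly the Ironic approach; header names assumed).
headers = {
    "X-OpenStack-Ironic-API-Minimum-Version": "1.1",
    "X-OpenStack-Ironic-API-Maximum-Version": "1.9",
}

# Style 2: carry the same information, plus a human-readable message,
# in the error body (illustrative field names only).
body = {
    "errors": [{
        "status": 406,
        "title": "Requested microversion is not supported",
        "detail": "Version 1.12 is not supported; supported range is 1.1 to 1.9.",
        "min_version": "1.1",
        "max_version": "1.9",
    }]
}

# A client doing version negotiation could then fall back, e.g.:
requested = (1, 12)
supported_max = tuple(int(p) for p in body["errors"][0]["max_version"].split("."))
negotiated = min(requested, supported_max)
print(negotiated)  # (1, 9)
```
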

I hope we end up with a good guideline for microversions, because we can only
change the microversion mechanism itself in a backwards-compatible way.

Thanks
Alex Xu


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Thanks,
Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] How to deal with aborted image read?

2015-06-05 Thread Flavio Percoco

On 04/06/15 11:46 -0600, Chris Friesen wrote:

On 06/04/2015 03:01 AM, Flavio Percoco wrote:

On 03/06/15 16:46 -0600, Chris Friesen wrote:

We recently ran into an issue where nova couldn't write an image file due to
lack of space and so just quit reading from glance.

This caused glance to be stuck with an open file descriptor, which meant that
the image consumed space even after it was deleted.

I have a crude fix for nova at "https://review.openstack.org/#/c/188179/";
which basically continues to read the image even though it can't write it.
That seems less than ideal for large images though.

Is there a better way to do this?  Is there a way for nova to indicate to
glance that it's no longer interested in that image and glance can close the
file?

If I've followed this correctly, on the glance side I think the code in
question is ultimately glance_store._drivers.filesystem.ChunkedFile.__iter__().


Actually, to be honest, I was quite confused by the email :P

Correct me if I still didn't understand what you're asking.

You ran out of space on the Nova side while downloading the image and
there's a file descriptor leak somewhere either in that lovely (sarcasm)
glance wrapper or in glanceclient.


The first part is correct, but the file descriptor is actually held by 
glance-api.


Just by reading your email and glancing at your patch, I believe the bug
might be in glanceclient, but I'd need to dive into this. The piece of
code you'll need to look into is [0].

glance_store is just used server side. If that's what you meant -
glance is keeping the request and the ChunkedFile around - then yes,
glance_store is the place to look into.

[0]
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v1/images.py#L152


I believe what's happening is that the ChunkedFile code opens the file 
and creates the iterator.  Nova then starts iterating through the 
file.


If nova (or any other user of glance) iterates all the way through the 
file then the ChunkedFile code will hit the "finally" clause in 
__iter__() and close the file descriptor.


If nova starts iterating through the file and then stops (due to 
running out of room, for example), the ChunkedFile.__iter__() routine 
is left with an open file descriptor.  At this point deleting the 
image will not actually free up any space.


I'm not a glance guy so I could be wrong about the code.  The 
externally-visible data are:

1) glance-api is holding an open file descriptor to a deleted image file
2) If I kill glance-api the disk space is freed up.
3) If I modify nova to always finish iterating through the file the 
problem doesn't occur in the first place.


Gotcha, thanks for explaining. I think the problem is that there might
be a reference leak and therefore the FD is kept open. Probably the
request interruption is not getting through to the driver. I've filed this
bug [0] so we can look into it.

[0] https://bugs.launchpad.net/glance-store/+bug/1462235
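
For reference, a minimal self-contained sketch (not the actual glance_store
code) of why an abandoned iterator keeps its descriptor open, and why
explicitly closing the generator releases it:

```
import os
import tempfile

def chunked_file(path, chunk_size=4):
    """Mimics the pattern in question: yield chunks, close the fd in finally."""
    fp = open(path, "rb")
    try:
        while True:
            chunk = fp.read(chunk_size)
            if not chunk:
                break
            yield chunk
    finally:
        # Reached only when iteration completes, close() is called, or the
        # generator is garbage-collected.
        fp.close()

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"0123456789")
tmp.close()

reader = chunked_file(tmp.name)
print(next(reader))   # the consumer reads one chunk, then gives up (e.g. ENOSPC)
# Without this, fp stays open and a deleted file's blocks cannot be reclaimed:
reader.close()        # raises GeneratorExit inside the generator; finally runs
os.unlink(tmp.name)
```
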

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] When to bump the microversion?

2015-06-05 Thread Dmitry Tantsur

On 06/04/2015 05:27 PM, Lucas Alvares Gomes wrote:

Hi Ruby,

Thanks for starting this thread; just like you, I've always been confused
about when to bump the microversion of the API and when not to.


Backwards-compatible API additions with no user signaling are a fallacy,
because they assume the arrow of time flows only one way.

If at version 1.5 you have a resource that is

foo {
   "bar": ...
}

And then you decide you want to add another attribute

foo {
   "bar": ...
   "baz": ...
}

If you don't bump the version, you'll get a set of users that use a
cloud with baz, and incorrectly assume that version 1.5 of the API means
that baz will always be there. Except, there are lots of clouds out
there, including ones that might be at the code commit before it was
added. Because there are lots of deploys in the world, your users can
effectively go back in time.

So now your API definition for version 1.5 is:

"foo, may or may not contain baz, and there is no way of you knowing if
it will until you try. good luck."

Which is pretty awful.



Oh, that's a good point, I can see the value in that.

Perhaps the guide should define bumping the microversion in words something
along these lines: "Whenever a change is made to the API which is visible to
the client, the microversion should be incremented"?
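
As a purely hypothetical illustration (not Ironic's actual code) of why the
bump matters, the server would only expose the new attribute to clients that
negotiated a version that includes it:

```
# Hypothetical: "baz" was added to the foo resource at microversion 1.6.
BAZ_ADDED_IN = (1, 6)

def serialize_foo(foo, requested_version):
    body = {"bar": foo["bar"]}
    if requested_version >= BAZ_ADDED_IN:
        body["baz"] = foo["baz"]
    return body

foo = {"bar": 1, "baz": 2}
print(serialize_foo(foo, (1, 5)))  # {'bar': 1}
print(serialize_foo(foo, (1, 6)))  # {'bar': 1, 'baz': 2}
```
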


Ok, one more last thought on this topic: the definition of a visible change 
can go the wrong way even faster. E.g. remember that our JSON fields 
(instance_info, driver_info and even driver_internal_info) are part of the 
API. Which means that we effectively have feature-gated drivers :) and 
features like cleaning as well (which we actually did via a configuration 
option, despite everything said in this thread).




This is powerful because it gives clients a fine-grained way to detect
which API features are available.

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Standard way to indicate a partial blueprint implementation?

2015-06-05 Thread Flavio Percoco

On 04/06/15 19:23 -0400, Zane Bitter wrote:
Ever since we established[1] a format for including metadata about 
bugs in Git commit messages that included a 'Partial-Bug' tag, people 
have been looking for a way to do the equivalent for partial blueprint 
implementations. A non-exhaustive search of a small number of projects 
reveals at least the following formats in use:


partial blueprint: x
Implements: partial blueprint x
Implements: partial-blueprint x
Partial-Blueprint: x
part of blueprint x
partial blueprint x
Implements: blueprint x (partial)
Partially implements: blueprint x
Partially Implements: blueprint x
Partially-Implements: blueprint x
Partial implements blueprint x
Partially-Implements-Blueprint: x
Part of blueprint x
Partial-implements: blueprint x
Partial-Implements: blueprint x
partially implement: blueprint x

No guidance is available on the wiki page.[2] Clearly the regex 
doesn't care so long as it sees the word blueprint followed by the 
blueprint name. I have personally been using:


 Implements: partial-blueprint x

but I don't actually care much. I would also be fine with:

 Partially-Implements: blueprint x


This is the one that we've been using in Zaqar.



I do think it should have a colon so that it fits the format of the 
rest of the sign-off stanza; I don't think it should have any spaces 
before the colon, for the same reason; and ideally it would have 
'Implements' in there somewhere for consistency.


Sahara folks have documented a standard,[3] but it fails on some of 
those criteria and in any event they haven't actually been following 
it.
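
(For what it's worth, the looseness mentioned above is easy to demonstrate; a
hypothetical matcher along these lines, which is an assumption and not the
actual Gerrit/Launchpad hook, accepts every variant listed, with a made-up
blueprint name for illustration:)

```
import re

# Hypothetical pattern: any occurrence of "blueprint" followed by a name.
BLUEPRINT_RE = re.compile(r'blueprint[\s:]+([a-z0-9][a-z0-9-]*)', re.IGNORECASE)

for line in [
    "Implements: partial-blueprint example-feature",
    "Partially-Implements: blueprint example-feature",
    "part of blueprint example-feature",
    "Partially-Implements-Blueprint: example-feature",
]:
    match = BLUEPRINT_RE.search(line)
    print(match.group(1) if match else "no match")
```
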


Can we agree on and document a standard way of doing this?

Yes, someone -1'd my patch for this. How did you guess?

cheers,
Zane.

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-August/012945.html
[2] https://wiki.openstack.org/wiki/GitCommitMessages
[3] https://wiki.openstack.org/wiki/Sahara/GitCommits

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Service Chain project IRC meeting minutes - 06/04/2015

2015-06-05 Thread Vikram Choudhary
Armando, yes, I will be using the latest version to head up the work.

From: Armando M. [mailto:arma...@gmail.com]
Sent: 05 June 2015 08:57
To: OpenStack Development Mailing List (not for usage questions)
Cc: Kalyankumar Asangi; Ramanjaneya palleti
Subject: Re: [openstack-dev] [neutron] Service Chain project IRC meeting 
minutes - 06/04/2015



On 4 June 2015 at 19:32, Vikram Choudhary 
mailto:viks...@gmail.com>> wrote:
Hi Cathy,

Thanks for heading up this meeting. No worries about the timing, time zones are 
really difficult to handle ;)

I do agree with Armando that finalization of the API is important and must be 
done as early as possible. As discussed in the last meeting, I will start 
working on this and hope that by the next meeting we have something in the kitty.

Are you going to use a polished-up version of spec [1] (which needs a rebase 
and a transition to the networking-sfc repo when that's ready)?

[1] https://review.openstack.org/#/c/177946/


Thanks
Vikram


On Fri, Jun 5, 2015 at 6:36 AM, Armando M. 
mailto:arma...@gmail.com>> wrote:

On 4 June 2015 at 14:17, Cathy Zhang 
mailto:cathy.h.zh...@huawei.com>> wrote:
Thanks for joining the service chaining meeting today! Sorry for the time 
confusion. We will correct the weekly meeting time to 1700 UTC (10am Pacific 
time) Thursdays in #openstack-meeting-4 on the OpenStack meeting page.


Cathy, thanks for driving this. I took the liberty of carrying out one of the 
actions identified in the meeting: the creation of a repo to help folks 
collaborate over code/documentation/testing, etc. [1]. As for the core team 
definition, we'll start with a single member who can add new folks as more 
docs/code get poured in.

One question I had when looking at the minutes was regarding the slides [2]. 
I'm not sure whether discussing deployment architectures while the API is 
still baking is premature, but I wonder if you had given some thought to 
having a pure agentless architecture even for the OVS path.

Having said that, as soon as the repo is up and running, I'd suggest to move 
any relevant document (e.g. API proposal, use cases, etc) over to the repo and 
reboot the review process so that everyone can be on the same page.

Cheers,
Armando

[1] https://review.openstack.org/#/c/188637/
[2] 
https://docs.google.com/presentation/d/1SpVyLBCMRFBpMh7BsHmpENbSY6qh1s5NRsAS68ykd_0/edit#slide=id.p

Meeting Minutes:
http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.html
Meeting Minutes (text): 
http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.txt
Meeting Log:
http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.log.html

The next meeting is scheduled for June 11 (same place and time).

Thanks,
Cathy


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev