Re: [openstack-dev] [Oslo] Improving deprecated options identification and documentation

2016-01-28 Thread Kuvaja, Erno
> From: Ronald Bradford [mailto:m...@ronaldbradford.com] 
> Sent: Wednesday, January 20, 2016 6:34 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Oslo] Improving deprecated options 
> identification and documentation
>
> Markus,
>
> 
>> Yes, in what release it is to be removed, e.g. Mitaka.  So when is
>> that release cycle, i.e. now once removed there is no record.
>
> The information at which point in time a removal will happen can be
> derived from a combination of:
> * the "Deprecation Notes" (e.g. Nova's at [1]) and
> * the "follows_standard_deprecation" policy [2].
> I don't see the immediate need to duplicate that information.
>
> The potential duplication you refer to enables code scanning/automation to 
> detect and even initiate steps at the start of a release cycle to remove 
> deprecated options.
> Looking at documented notes is a more inconsistent, manual approach. The
> number of deprecated options should not be high, so I do not see the issue in
> ensuring this information is in code as well as in the docs.

Especially for libraries and projects that release multiple times per cycle,
something like "Will be removed in the first release after 17.9.2016" rather
than "Will be removed in X.Y.Z" would be preferable. This way we can ensure
correctness and a proper deprecation period without needing to care what
releases the project happens to make in between. It's not exactly easy to
predict all the changes coming months ahead well enough to identify the
specific release at the time the deprecation happens.

Apart from that I'm happily behind the proposal of documenting the deprecations 
better.
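
For illustration, here is roughly how such a deprecation could be expressed
with oslo.config today (a minimal sketch; the option and reason text are made
up for illustration, and the exact field names should be checked against the
oslo.config docs):

    from oslo_config import cfg

    opts = [
        cfg.StrOpt('scrubber_datadir',
                   default='/var/lib/glance/scrubber',
                   deprecated_for_removal=True,
                   deprecated_reason='The queue now lives in the database; '
                                     'planned for removal in the first '
                                     'release after 17.9.2016.',
                   deprecated_since='Mitaka'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(opts)

Until there is a dedicated field for a removal date, the date-based wording
could simply live in deprecated_reason as above.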

- Erno
>
>
> I agree that, if ``deprecated_for_removal=True``, there should be an
> explanation in ``deprecation_reason`` of **why** we deprecated it and which
> follow-up actions seem reasonable for the ops.
>
> Thanks!  I think for now, stating a reason, stating what release it was
> deprecated in, and what release it should be removed in provides a starting
> point with a low barrier of entry to see results.
>
> Ronald (rbradfor)
>
> 
> References:
> [1] Nova's current release notes based on "reno"; "Deprecation Notes":
>
> http://docs.openstack.org/releasenotes/nova/unreleased.html#deprecation-notes
> [2] OpenStack governance docs; tag "assert_follows_standard_deprecation":
>
> https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
>
> Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][cinder][neutron]How to make use of x-openstack-request-id

2016-01-27 Thread Kuvaja, Erno
> -Original Message-
> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> Sent: Wednesday, January 27, 2016 9:56 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova][glance][cinder][neutron]How to make
> use of x-openstack-request-id
> 
> 
> 
> On 1/27/2016 9:40 AM, Tan, Lin wrote:
> > Thank you so much, Erno. This really helps me a lot!!
> >
> > Tan
> >
> > *From:*Kuvaja, Erno [mailto:kuv...@hpe.com]
> > *Sent:* Tuesday, January 26, 2016 8:34 PM
> > *To:* OpenStack Development Mailing List (not for usage questions)
> > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > make use of x-openstack-request-id
> >
> > Hi Tan,
> >
> > While the cross-project spec was being discussed, Glance already had an
> > implementation of request ids in place. At the time of the Glance
> > implementation we assumed that a single request id is desired through
> > the chain of services, so we implemented the request id to be accepted
> > as part of the request. This was mainly driven by the wish to have the
> > same request id through the chain between glance-api and glance-registry,
> > but as the same code was used in both the api and registry services we
> > got this functionality across glance.
> >
> > The cross-project discussion turned this approach down and decided that
> > only a new request id will be returned. We did not want to maintain two
> > different code bases to handle request ids in glance-api and
> > glance-registry, nor did we want to remove the ability to pass request
> > ids to the service, as that was already merged to our API. Thus, if
> > requests are passed to the services without a request id defined, they
> > behave the same way (apart from nova having a different header name),
> > but with glance the request maker has the liberty to specify the request
> > id they want to use (within configured length limits).
> >
> > Hopefully that clarifies it for you.
> >
> > -Erno
> >
> > *From:*Tan, Lin [mailto:lin@intel.com]
> > *Sent:* 26 January 2016 01:26
> > *To:* OpenStack Development Mailing List (not for usage questions)
> > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > make use of x-openstack-request-id
> >
> > Thanks Kekane. I tested glance/neutron/keystone with
> > ``x-openstack-request-id`` and found something interesting.
> >
> > I am able to pass ``x-openstack-request-id`` to glance and it will
> > use the UUID as its request-id. But it failed with neutron and keystone.
> >
> > Here is my test:
> >
> > http://paste.openstack.org/show/484644/
> >
> > It looks like this is because keystone and neutron are using
> > oslo_middleware:RequestId.factory, and in this part:
> >
> > https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/request_id.py#L35
> >
> > it will always generate a UUID and append it to the response as the
> > ``x-openstack-request-id`` header.
> >
> > My question is: should we accept an externally passed request-id as the
> > project's own request-id, or should each project have its own unique
> > request-id?
> >
> > In other words, which one is the correct way, glance or neutron/keystone?
> > There must be something wrong with one of them.
> >
> > Thanks
> >
> > B.R
> >
> > Tan
> >
> > *From:*Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
> > *Sent:* Wednesday, December 2, 2015 2:24 PM
> > *To:* OpenStack Development Mailing List
> > (openstack-dev@lists.openstack.org
> > <mailto:openstack-dev@lists.openstack.org>)
> > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > make use of x-openstack-request-id
> >
> > Hi Tan,
> >
> > Most of the OpenStack RESTful APIs return `X-Openstack-Request-Id` in
> > the API response header, but this request id is not available to the
> > caller from the python client.
> >
> > When you use the --debug option from the command prompt using the
> > client, you can see `X-Openstack-Request-Id` on the console but it is
> > not logged anywhere.
> >
> > Currently a cross-project spec [1] has been submitted and approved for
> > returning X-Openstack-Request-Id to the caller, and the implementation
> > for the same is in progress.
> >
> > Please go through the spec for detailed information, which will help
> > you to understand more about request-ids and the current work around them.
> >
> > Please feel free to get back to me anytime with questions.
> >
> > [1]
> > https://github.com/

Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-27 Thread Kuvaja, Erno
> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: Monday, January 25, 2016 3:07 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on 
> the
> idea to move it forward
> 
> On 20/01/16 13:23 -0430, Flavio Percoco wrote:
> >Thoughts? Feedback?
> 
> Hey Folks,
> 
> Thanks a lot for the feedback. Great comments and proposals in the many
> replies.
> I've gone through the whole thread and collected the most common
> feedback.
> Here's the summary:
> 
> - The general idea of planning some sort of stabilization for a project is 
> good
>   but considering a cycle for it is terrible. It'd be easier if development
>   cycles would be shorter but the 6-month based development cycles don't
> allow
>   for planning this properly.
> 
> - Therefore, milestones are more likely to be good for this but there has to
> be
>   a good plan. What will happen with on-going features? How does a project
>   decide what to merge or not? Is it really going to help with reviews/bugs
>   backlog? Or would this just increase the backlog?
> 
> - We shouldn't need any governance resolution/reference for this. Perhaps a
>   chapter/section on the project team guide should do it.
> 
> - As with other changes in the community, it'd be awesome to get feedback from a
>   project doing this before we start encouraging other projects to do the
> same.
> 
> 
> I'll work on adding something to the project team guide that covers the
> above points.
> 
> did I miss something? Anything else that we should add and or consider?
> 

Sorry for jumping in this late, but I have been thinking about this since your
first e-mail and one thing bothers me. Don't we already have a stabilization
cycle for each release, starting right from the release?

In my understanding this is exactly what Stable release Support Phase I is:
accepting bug fixes but no new features. After 6 months the release is moved
to Phase II, where only critical and security fixes are accepted. I think this
is a good example of a stabilization cycle, and the output is considered solid.

All concerns considered, I think the big problem really is getting people to
work on these cycles. Perhaps we should encourage more active maintenance of
our stable branches and then see what we can bring from that to our
development branches, expertise- and knowledge-wise.

While I'm not a huge believer in constant agile development, this is one of
those things that needs to be lived with, and I think stable branches are our
best bet for stabilization work (specifically when that work needs to land on
master first). For long-term refactoring I'd like to see us use more feature
branches so we can keep doing the work without releasing it before it's done.

My 2 Euro cents,
Erno

> Cheers,
> Flavio
> 
> --
> @flaper87
> Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Glance Core team additions/removals

2016-01-26 Thread Kuvaja, Erno
> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: 26 January 2016 14:42
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [glance] Glance Core team additions/removals
> 
> 
> Greetings,
> 
> I'd like us to have one more core cleanup for this cycle:
> 
> Additions:
> 
> - Kairat Kushaev
> - Brian Rosmaita
> 
> Both have done amazing reviews either on specs or code and I think they
> both would be an awesome addition to the Glance team.

+2/+2 for adding them both to the Glance _core_reviewer_ team; they are very
much part of the Glance team already ;)

> 
> Removals:
> 
> - Alexander Tivelkov
> - Fei Long Wang
> 
> Fei Long and Alexander are both part of the OpenStack community.
> However, their focus and time has shifted from Glance and, as it stands right
> now, it would make sense to have them both removed from the core team.
> This is not related to their reviews per-se but just prioritization. I'd like 
> to
> thank both, Alexander and Fei Long, for their amazing contributions to the
> team. If you guys want to come back to Glance, please, do ask. I'm sure the
> team will be happy to have you on board again.

This is definitely our loss and I would be happy to see them back on our front
line if their focus changes.

- Erno

>
> To all other members of the community. Please, provide your feedback.
> Unless someone objects, the above will be effective next Tuesday.
> 
> Cheers,
> Flavio
> 
> --
> @flaper87
> Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][cinder][neutron]How to make use of x-openstack-request-id

2016-01-26 Thread Kuvaja, Erno
Hi Tan,

While the cross-project spec was being discussed, Glance already had an
implementation of request ids in place. At the time of the Glance
implementation we assumed that a single request id is desired through the
chain of services, so we implemented the request id to be accepted as part of
the request. This was mainly driven by the wish to have the same request id
through the chain between glance-api and glance-registry, but as the same code
was used in both the api and registry services we got this functionality
across glance.

The cross-project discussion turned this approach down and decided that only a
new request id will be returned. We did not want to maintain two different
code bases to handle request ids in glance-api and glance-registry, nor did we
want to remove the ability to pass request ids to the service, as that was
already merged to our API. Thus, if requests are passed to the services
without a request id defined, they behave the same way (apart from nova having
a different header name), but with glance the request maker has the liberty to
specify the request id they want to use (within configured length limits).

Hopefully that clarifies it for you.
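
For illustration, the difference is easy to see with python-requests against a
devstack-style deployment (a sketch only; the endpoints, ports and token
handling here are assumptions of a default setup):

    import uuid

    import requests

    TOKEN = 'gAAAA...'  # a valid keystone token, obtained elsewhere
    req_id = 'req-' + str(uuid.uuid4())
    headers = {'X-Auth-Token': TOKEN, 'X-Openstack-Request-ID': req_id}

    # Glance honours the caller-supplied id (within configured length limits).
    r = requests.get('http://127.0.0.1:9292/v2/images', headers=headers)
    print(r.headers.get('x-openstack-request-id'))  # echoes req_id back

    # Neutron (via oslo.middleware) ignores the supplied value and always
    # generates its own.
    r = requests.get('http://127.0.0.1:9696/v2.0/networks', headers=headers)
    print(r.headers.get('x-openstack-request-id'))  # a freshly generated id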


-  Erno

From: Tan, Lin [mailto:lin@intel.com]
Sent: 26 January 2016 01:26
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][glance][cinder][neutron]How to make use of 
x-openstack-request-id

Thanks Kekane. I tested glance/neutron/keystone with
``x-openstack-request-id`` and found something interesting.

I am able to pass ``x-openstack-request-id`` to glance and it will use the
UUID as its request-id. But it failed with neutron and keystone.
Here is my test:
http://paste.openstack.org/show/484644/

It looks like this is because keystone and neutron are using
oslo_middleware:RequestId.factory, and in this part:
https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/request_id.py#L35
it will always generate a UUID and append it to the response as the
``x-openstack-request-id`` header.
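
What that factory does is roughly the following (a simplified sketch of the
behaviour only, not the actual oslo.middleware code):

    import uuid

    import webob.dec


    class RequestId(object):
        """Simplified: attach a freshly generated id to every response."""

        def __init__(self, application):
            self.application = application

        @webob.dec.wsgify
        def __call__(self, req):
            # The incoming X-Openstack-Request-ID header is never consulted.
            req_id = 'req-' + str(uuid.uuid4())
            req.environ['openstack.request_id'] = req_id
            response = req.get_response(self.application)
            response.headers['x-openstack-request-id'] = req_id
            return response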

My question is: should we accept an externally passed request-id as the
project's own request-id, or should each project have its own unique
request-id?
In other words, which one is the correct way, glance or neutron/keystone?
There must be something wrong with one of them.

Thanks

B.R

Tan


From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
Sent: Wednesday, December 2, 2015 2:24 PM
To: OpenStack Development Mailing List 
(openstack-dev@lists.openstack.org)
Subject: Re: [openstack-dev] [nova][glance][cinder][neutron]How to make use of 
x-openstack-request-id


Hi Tan,



Most of the OpenStack RESTful APIs return `X-Openstack-Request-Id` in the API
response header, but this request id is not available to the caller from the
python client.

When you use the --debug option from the command prompt using the client, you
can see `X-Openstack-Request-Id` on the console but it is not logged anywhere.



Currently a cross-project spec [1] has been submitted and approved for
returning X-Openstack-Request-Id to the caller, and the implementation for the
same is in progress.

Please go through the spec for detailed information, which will help you to
understand more about request-ids and the current work around them.



Please feel free to get back to me anytime with questions.



[1] 
https://github.com/openstack/openstack-specs/blob/master/specs/return-request-id.rst



Thanks,



Abhishek Kekane









Hi guys

I recently played around with the 'x-openstack-request-id' header but have a
dumb question about how it works. At the beginning, I thought an action across
different services should use the same request-id, but it looks like this is
not true.



First I read the spec:
https://blueprints.launchpad.net/nova/+spec/cross-service-request-id which
says "This ID and the request ID of the other service will be logged at
service boundaries", and I see that cinder/neutron/glance attach their
context's request-id as the value of the "x-openstack-request-id" header in
their responses, while nova uses X-Compute-Request-Id. This is easy to
understand. So it looks like each service should generate its own request-id
and attach it to its response, that's all.



But then I see glance reads 'X-Openstack-Request-ID' to generate the
request-id, while cinder/neutron/nova read 'openstack.request_id' when used
with keystone. That is, they try to reuse the request-id from keystone.



This totally confused me. It would be great if you could correct me or point
me to some reference. Thanks a lot.



Best Regards,



Tan



Re: [openstack-dev] [glance][keystone][artifacts] Service Catalog name for Glance Artifact Repository API

2015-12-14 Thread Kuvaja, Erno
> -Original Message-
> From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
> Sent: Friday, December 11, 2015 8:13 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [glance][keystone][artifacts] Service Catalog
> name for Glance Artifact Repository API
> 
> 
> 
> On 12/11/15, 12:25, "Alexander Tivelkov"  wrote:
> 
> >Hi folks!
> >
> >
> >As it was decided during the Mitaka design summit, we are separating
> >the experimental Artifact Repository API from the main Glance API. This
> >API will have a versioning sequence independent from the main Glance
> >API and will be run as a standalone optional service, listening on a
> >port different from the standard glance-api port (currently the
> >proposed default is 9393). Meanwhile, it will remain an integral part
> >of the larger Glance project, sharing the database, implementation
> >roadmap, development and review teams, etc.
> 
> +1 to 9494 for DevStack so developers can run Arti and Searchlight along
> side each other.
> 
> >Since this API will be consumed by both end-users and other Openstack
> >services, its endpoint should be discoverable via the regular service
> >catalog API. This raises the question: what should be the service name
> >and service type for the appropriate entry in the service catalog?
> >
> >
> >We've come up with the idea to call the service "glare" (this is our
> >internal codename for the artifacts initiative, being an acronym for
> >"GLance Artifact REpository") and set its type to "artifacts". Other
> >alternatives for the name may be "arti" or "glance_artifacts",
> >and for the type "assets" or "objects" (the latter may be confusing
> >since swift's type is object-store, so I personally don't like it).
> 
> For the type, I would think either "asset" or "artifact" (along the lines of 
> how
> glance is "image", and neutron is "network"). I tend to lean towards 
> "artifact"
> though.
> 
> As for the "default" (I assume DevStack) name, why not just "glare"? The
> description should be "Glance Artifact Service" (which I think matters
> slightly more to end users than the name).

++

- Erno
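
For what it's worth, registering such a catalog entry would look roughly like
this with python-keystoneclient (a sketch only; the credentials, URLs and the
"glare"/"artifact" naming are assumptions pending the outcome of this thread):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://127.0.0.1:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    keystone = client.Client(session=session.Session(auth=auth))

    svc = keystone.services.create(name='glare', type='artifact',
                                   description='Glance Artifact Service')
    keystone.endpoints.create(service=svc, interface='public',
                              url='http://127.0.0.1:9494',
                              region='RegionOne')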
> 
> >Well... we all know, naming is complicated... anyway, I'll appreciate
> >any feedback on this. Thanks!
> >
> >
> >
> >--
> >
> >Regards,
> >Alexander Tivelkov
> >
> 
> --
> Cheers,
> Ian
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][keystone][artifacts] Service Catalog name for Glance Artifact Repository API

2015-12-14 Thread Kuvaja, Erno
> -Original Message-
> From: McLellan, Steven
> Sent: Friday, December 11, 2015 6:37 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [glance][keystone][artifacts] Service Catalog
> name for Glance Artifact Repository API
> 
> Hi Alex,
> 
> Searchlight uses port 9393 (it also made sense to us when we spun out of
> Glance!), so we would prefer it if there's another one that makes sense.
> Regarding the three hardest things in computer science, searchlight's already
> dealing with cache invalidation so I'll stay out of the naming discussion.
> 
> Thanks!
> 
> Steve

Thanks for the heads up, Steve.

Would you mind making sure that it gets registered for Searchlight as well?
It's not listed in the config-reference [0] nor at IANA [1] (it seems that at
least the glance ports are not registered with IANA either, fwiw):

[0] 
http://docs.openstack.org/liberty/config-reference/content/firewalls-default-ports.html
[1] 
http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml

- Erno
> 
> From: Alexander Tivelkov
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> Date: Friday, December 11, 2015 at 11:25 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject: [openstack-dev] [glance][keystone][artifacts] Service Catalog name
> for Glance Artifact Repository API
> 
> Hi folks!
> 
> As it was decided during the Mitaka design summit, we are separating the
> experimental Artifact Repository API from the main Glance API. This API will
> have a versioning sequence independent from the main Glance API and will
> be run as a standalone optional service, listening on a port different from
> the standard glance-api port (currently the proposed default is 9393).
> Meanwhile, it will remain an integral part of the larger Glance project, 
> sharing
> the database, implementation roadmap, development and review teams
> etc.
> 
> Since this API will be consumed by both end-users and other Openstack
> services, its endpoint should be discoverable via the regular service catalog
> API. This raises the question: what should be the service name and service
> type for the appropriate entry in the service catalog?
> 
> We've come up with the idea to call the service "glare" (this is our internal
> codename for the artifacts initiative, being an acronym for "GLance Artifact
> REpository") and set its type to "artifacts". Other alternatives for the name
> may be "arti" or "glance_artifacts", and for the type "assets" or "objects"
> (the latter may be confusing since swift's type is object-store, so I
> personally don't like it).
> 
> Well... we all know, naming is complicated... anyway, I'll appreciate any
> feedback on this. Thanks!
> 
> --
> Regards,
> Alexander Tivelkov
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-glanceclient] Return request-id to caller

2015-12-11 Thread Kuvaja, Erno
> -Original Message-
> From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
> Sent: 11 December 2015 09:19
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [python-glanceclient] Return request-id to
> caller
> 
> 
> 
> -Original Message-
> From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
> Sent: 10 December 2015 12:56
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [python-glanceclient] Return request-id to
> caller
> 
> 
> 
> -Original Message-
> From: stuart.mcla...@hp.com [mailto:stuart.mcla...@hp.com]
> Sent: 09 December 2015 23:54
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [python-glanceclient] Return request-id to
> caller
> 
> > Excerpts from Flavio Percoco's message of 2015-12-09 09:09:10 -0430:
> >> On 09/12/15 11:33 +, Kekane, Abhishek wrote:
> >>> Hi Devs,
> >>>
> >>>
> >>>
> >>> We are adding support for returning 'x-openstack-request-id' to the
> >>> caller as per the design proposed in the cross-project spec:
> >>>
> >>> http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html
> >>>
> >>>
> >>>
> >>> Problem Description:
> >>>
> >>> Cannot add a new property of list type to the warlock.model object.
> >>>
> >>>
> >>>
> >>> How is a model object created:
> >>>
> >>> Let's take an example of the glanceclient.api.v2.images.get() call [1]:
> >>>
> >>> Here, after getting the response, we call the model() method. This
> >>> model() does the job of creating a warlock.model object (essentially a
> >>> dict) based on the schema given as argument (the image schema retrieved
> >>> from glance in this case). Inside model() the raw() method simply
> >>> returns the image schema as a JSON object. The advantage of this
> >>> warlock.model object over a simple dict is that it validates any
> >>> changes to the object based on the rules specified in the reference
> >>> schema. The keys of this model object are available as object
> >>> properties to the caller.
> >>>
> >>>
> >>>
> >>> Underlying reason:
> >>>
> >>> The schema for different sub-APIs is returned a bit differently. For
> >>> the images and metadef APIs glance.schema.Schema.raw() is used, which
> >>> returns a schema containing "additionalProperties": {"type": "string"}.
> >>> Whereas for the members and tasks APIs glance.schema.Schema.minimal()
> >>> is used to return a schema object which does not contain
> >>> "additionalProperties".
> >>>
> >>>
> >>>
> >>> So we can add extra properties of any type to the model object
> >>> returned from the members or tasks API, but for the images and metadef
> >>> APIs we can only add properties of type string. Also, in the latter
> >>> case we depend on the glance configuration to allow additional
> >>> properties.
> >>>
> >>>
> >>>
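
For reference, the warlock behaviour being described can be reproduced in a
few lines (a sketch; warlock is the library glanceclient actually uses, but
this particular schema is made up for illustration):

    import warlock

    schema = {
        'name': 'image',
        'properties': {'name': {'type': 'string'}},
        'additionalProperties': {'type': 'string'},
    }
    Image = warlock.model_factory(schema)

    image = Image(name='cirros')
    image['owner'] = 'admin'  # accepted: an additional *string* property
    # Rejected with warlock's validation error: an additional property
    # that is a list, not a string.
    image['request_ids'] = ['req-123']
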
> >>> As per our analysis we have come up with two approaches for resolving
> >>> this issue:
> >>>
> >>> Approach #1: Inject a request_ids property into the warlock model
> >>> object in the glance client
> >>>
> >>> Here we do the following:
> >>>
> >>> 1. Inject "request_ids" as an additional property into the model
> >>> object (returned from model())
> >>>
> >>> 2. Return the model object, which now contains the request_ids property
> >>>
> >>>
> >>>
> >>> Limitations:
> >>>
> >>> 1. Because the glance schemas for images and metadef only allow
> >>> additional properties of type string, even though the natural type of
> >>> request_ids should be a list, we have to make it a comma-separated
> >>> "string" of request ids as a compromise.
> >>>
> >>> 2. A lot of extra code is needed to wrap objects returned from the
> >>> client API so that the caller can get request ids. For example, we
> >>> need to write wrapper classes for dict, list, str, tuple and generator.
> >>>
> >>> 3. Not a good design, as we are adding a property which should
> >>> actually be a base property but is added as an additional property as
> >>> a compromise.
> >>>
> >>> 4. There is a dependency on whether glance allows custom/additional
> >>> properties or not. [2]
> >>>
> >>>
> >>>
> >>> Approach #2: Add a "request_ids" property to all schema definitions
> >>> in glance
> >>>
> >>> Here we add the "request_ids" property as follows to the various APIs
> >>> (schema):
> >>>
> >>> "request_ids": {
> >>>     "type": "array",
> >>>     "items": {
> >>>         "type": "string"
> >>>     }
> >>> }
> >>>
> >>>
> >>>
> >>> Doing this will make the changes in the glance client very simple
> >>> compared to approach #1.
> >>>
> >>> This also looks like a better design as it will be consistent.
> >>>
> >>> We simply need to modify the request_ids property in the various API
> >>> calls, for example glanceclient.v2.images.get().
> >>>
> >>
> >> Hey Abhishek,
> >>
> >> thanks for working on this.
> >>
> >> To be honest, I'm a bit confused on why the request_id needs to be an
> >> attribute of the image. Isn't it passed as 

Re: [openstack-dev] [stable] Stable team PTL nominations are open

2015-12-09 Thread Kuvaja, Erno
> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: 09 December 2015 08:57
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [stable] Stable team PTL nominations are open
> 
> Thierry Carrez wrote:
> > Thierry Carrez wrote:
> >> The nomination deadline is passed, we have two candidates!
> >>
> >> I'll be setting up the election shortly (with Jeremy's help to
> >> generate election rolls).
> >
> > OK, the election just started. Recent contributors to a stable branch
> > (over the past year) should have received an email with a link to vote.
> > If you haven't and think you should have, please contact me privately.
> >
> > The poll closes on Tuesday, December 8th at 23:59 UTC.
> > Happy voting!
> 
> Election is over[1], let me congratulate Matt Riedemann for his election !
> Thanks to everyone who participated to the vote.
> 
> Now I'll submit the request for spinning off as a separate project team to the
> governance ASAP, and we should be up and running very soon.
> 
> Cheers,
> 
> [1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a
> 
> --
> Thierry Carrez (ttx)
> 

Congratulations Matt,

Almost 200 voters; that sounds like a great start for the new team.

- Erno

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Add Ian Cordasco back into glance-core

2015-12-08 Thread Kuvaja, Erno
> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: Monday, December 07, 2015 4:36 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [glance] Add Ian Cordasco back into glance-core
> 
> Greetings,
> 
> Not long ago, Ian Cordasco sent an email out stepping down from his core
> roles as he didn't have the time to dedicate to the project teams he was part
> of.
> 
> Ian has contacted me mentioning that he's gotten clearance, and therefore,
> time to dedicate to Glance and other activities around our community (I'll let
> him expand on this and answer questions if there are).
> 
> As it was mentioned in the "goodbye thread" - and because Ian knows
> Glance quite well already, including the processes we follow - I'd like to
> propose a fast-track addition for him to join the team again.
> 
> Please, just like for every other folk volunteering for this role, do provide
> your feedback on this. If no rejections are made, I'll proceed to adding Ian
> back to our core team in a week from now.
> 
> Cheers,
> Flavio
> 
> --
> @flaper87
> Flavio Percoco

+2A

Ian, good to have you back!

- Erno
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-12-04 Thread Kuvaja, Erno
> -Original Message-
> From: Jeremy Stanley [mailto:fu...@yuggoth.org]
> Sent: Wednesday, December 02, 2015 8:34 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno)
> WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
> 
> [Apologies for the delayed reply, after more than a week without Internet
> access it's taking me more than a week to catch up with everything on the
> mailing lists.]
> 
> On 2015-11-20 10:21:47 + (+), Kuvaja, Erno wrote:
> [...]
> > So we were brainstorming this with Rocky the other night. Would this
> > be possible to do by following:
> >
> > 1) we still tag juno EOL in few days time
> 
> Hopefully by the end of this week, once I finish making sure I'm up to speed
> on everything that's been said while I was out (anything less would be
> irresponsible of me).
> 
> > 2) we do not remove the stable/juno branch
> 
> As pointed out later in this thread by Alan, it's technically possible to use 
> a tag
> instead of a branch name (after all, both are just Git refs in the end), and
> deleting the branch sends a clearer message that there are no new commits
> coming for stable/juno ever again.
> 
> > 3) we run periodic grenade jobs for kilo
> >
> > I'm not that familiar with the grenade job itself so I'm doing couple
> > of assumptions, please correct me if I'm wrong.
> >
> > 1) We could do this with py27 only
> 
> Our Grenade jobs are only using Python 2.7 anyway.
> 
> > 2) We could do this with Ubuntu 1404 only
> 
> That's the only place we run Grenade now that stable/icehouse is EOL (it was
> the last branch for which we supported Ubuntu 12.04).
> 
> > If this is doable would we need anything special for these jobs in
> > infra point of view or can we just schedule these jobs from the pool
> > running our other jobs as well?
> >
> > If so, are there still "quiet" slots in the infra utilization so that we
> > would not need extra resources poured in for this?
> >
> > Is there something else we would need to consider in QA/infra point of
> > view?
> [...]
> 
> There are no technical Infra-side blockers to changing how we've done this in
> the past and instead continuing to run stable/kilo Grenade jobs for some
> indeterminate period after stable/juno is dead, but it's also not (entirely) 
> up
> to Infra to decide this. I defer to the Grenade maintainers and QA team to
> make this determination, and they seem to be pretty heavily against the
> idea.
> 
> > Big question ref the 2), what can we do if the grenade starts failing?
> > In theory we won't be merging anything to kilo that _should_ cause
> > this and we definitely will not be merging anything to Juno to fix
> > these issues anymore. How much maintenance those grenade jobs
> > themselves needs?
> 
> That's the kicker. As I explained earlier in the thread from which this one
> split, keeping Juno-era DevStack and Tempest and all the bits on which they
> rely working in our CI without being able to make any modifications to them
> is intractable (mainly because of the potential for behavior changes in
> transitive dependencies not under our control):
> 
> http://lists.openstack.org/pipermail/openstack-dev/2015-
> December/081109.html
> 
> > So all in all, is the cost doing above too much to get indicator that
> > tells us when Juno --> Kilo upgrade is not doable anymore?
> 
> Yes. This is how we arrived at the EOL timeline for stable/juno in the first
> place: gauging our ability to keep running things like DevStack and Tempest
> on it. Now is not the time to discuss how we can keep Juno on some
> semblance of life support (that discussion concluded more than a year ago),
> it's time for discussing what we can implement in Mitaka so we have more
> reasonable options for keeping the stable/mitaka branch healthy a year from
> now.
> --
> Jeremy Stanley

Thanks for the two detailed replies, Jeremy!

I think this (and the one you sent in reply to Rocky) gives enough background,
in a compact package, for interested parties to start focusing their efforts
in the direction they deem appropriate. Let's see where we are in a year's
time.

- Erno

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [glance] Proposal to add Abhishek to Glance core team

2015-12-01 Thread Kuvaja, Erno
> -Original Message-
> From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
> Sent: Tuesday, December 01, 2015 6:21 AM
> To: OpenStack Development Mailing List (not for usage questions); Kekane,
> Abhishek
> Subject: [openstack-dev] [all] [glance] Proposal to add Abhishek to Glance
> core team
> 
> Hi,
> 
> As the requested (re-voting) on [1] seemed to conflict with the thread title, 
> I
> am __init__ing a new thread for the sake of clarity, closure and ease of vote.
> 
> Please do provide feedback on the proposal by me on this thread [1].
> Other reference links are [2] and [3].
> 
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-
> November/thread.html#80279
> [2]
> http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-10-01-
> 14.01.log.html#l-70
> 
> [3] https://launchpad.net/~abhishek-kekane
> 
> --
> 
> Thanks,
> Nikhil
> 
> 

Knowing that Abhishek has the potential and that he is a great guy, I have to
vote -1 on this with a really heavy heart.

While his statistics (apart from the disagreement rate) are looking good, I do
not see Abhishek as being as active a member as the community would expect
from a core. I don't see him participating on the mailing list, IRC, design
sessions, commits or our meetings.

I find it especially hard to justify as we have had a few names coming up who
are really active from design through implementation to reviews, and while
they are still growing into the role, they demonstrate much higher cohesion
with the community.

- Erno

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Stable team PTL nominations are open

2015-11-26 Thread Kuvaja, Erno
> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: Monday, November 23, 2015 5:11 PM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [stable] Stable team PTL nominations are open
> 
> Hi everyone,
> 
> We discussed setting up a standalone stable maintenance team and as part
> of this effort we'll be organizing PTL elections over the coming weeks.
> 
> We held a preliminary meeting today to converge on team scope and
> election mechanisms. The stable team mission is to:
> 
> * Define and enforce the common stable branch policy
> * Educate and accompany projects as they use stable branches
> * Keep CI working on stable branches
> * Mentoring/growing the stable maintenance team
> * Create and improve stable tooling/automation
> 
> Anyone who successfully contributed a stable branch backport over the last
> year (on any active stable branch) is considered a stable contributor and can
> vote in the Stable PTL election.
> 
> If you're interested, please reply to this thread with your self-nomination
> (and your platform) in the coming week. Deadline for self-nomination is
> 23:59 UTC on Monday, Nov 30. Elections will then be held if needed the week
> after.
> 
> Thanks!
> 
> --
> Thierry Carrez (ttx)
> 

Hi all,

As indicated in [0] I'd like to put myself up for the task.

In my first reply [0] to the conversation about spinning the Stable Maint team
off on its own, I said that I would like the team to be liberally inclusive,
and I'm happy to see that happening by us counting all backporters as part of
this team.

What I think should be the first list of priorities for the PTL of this team:

* Activate people working on the stable branches. I've had a few conversations
with engineers in different companies saying that they are doing the stable
work downstream and it would make sense for them to do it upstream instead/as
well. We need to find ways to enable and encourage these people to do the work
in our stable branches to keep them healthy and up to date.

* With that comes gating. We get no benefit from stable branches if we cannot
run tests on them and merge those backports. This is currently done by a
handful of people and it's no easy task to ramp up new folks on that work. We
need to identify and encourage the people who have the correct mindset for it
to step up and share the workload of those few. Short term that will need even
more effort from the current group doing the work, and we need to ensure we do
not overload them.

* Coordination between the project stable maintenance teams. Not everyone
should be reinventing the wheel. I don't mean that we should recentralize
stable maintenance out of the project-specific teams, but we need to establish
active communication to share best practices, issues seen, etc.

* Stable Branch Policy [1]. The current revision rather discourages bringing
anything that is not absolutely needed to stable branches. I think we need to
find wording that encourages backporting bug fixes while still making sure
that the reviewers understand what is sustainable and appropriate to merge.

* Ramping up new projects to the stable mindset. Via the big tent we have lots
of new projects coming in; some of them would like to have their own stable
branches but might not have the experience and/or knowledge to do it right.

* Recognition for the people doing the stable work. We have lots of statistics
for reviews and commits, all the way to e-mails to the mailing list, but we do
not have anything showing interested parties how they or their area of
interest is doing on the stable side. While in an ideal world statistics
wouldn't be the driving factor for one's contributions, in the real world that
is way too often the case.

* Driving the stable-related project tagging reformation.


My background and motivations to run for the position:

* Before OpenStack, my recent work history is in enterprise support,
consulting and training. I have firsthand experience of what the enterprise
expectations and challenges are, and that's our audience for the stable
branches.

* Member of HPE Public Cloud engineering. We do run old code.

* Member of HPE Helion OpenStack engineering. We package and distribute stable 
releases.

* Glance Stable Liaison for the past year [2]. Freezer Stable Liaison,
bringing a new team up to speed with stable branching in OpenStack.

* Part of the Glance release team for the past cycle, driving
python-glanceclient and glance_store releases.

* I do have the time commitment from my management to work on improving 
upstream stable branches and processes.

I'm not part of stable-maint-core, nor do I belong to the group of gate fixers
mentioned earlier. I do believe that I can enable that group to work at their
best and limit the overhead the other areas on that priority list place on
them; I do believe that I can improve the communication between the project
teams and activate people to care more about their stable branches; and I do
know that 

Re: [openstack-dev] [all][glance] Add Sabari Kumar Murugesan <smuruge...@vmware.com>

2015-11-25 Thread Kuvaja, Erno
> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: Monday, November 23, 2015 8:21 PM
> To: openstack-dev@lists.openstack.org
> Cc: Sabari Kumar Murugesan
> Subject: [openstack-dev] [all][glance] Add Sabari Kumar Murugesan
> 
> 
> Greetings,
> 
> I'd like to propose adding Sabari Kumar Murugesan to the glance-core team.
> Sabari has been contributing for quite a bit to the project with great reviews
> and he's also been providing great feedback in matters related to the design
> of the service, libraries and other areas of the team.
> 
> I believe he'd be a great addition to the glance-core team as he has
> demonstrated a good knowledge of the code, service and project's priorities.
> 
> If Sabari accepts to join and there are no objections from other members of
> the community, I'll proceed to add Sabari to the team in a week from now.
> 
> Thanks,
> Flavio
> 
> --
> @flaper87
> Flavio Percoco

+2
- Erno
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-20 Thread Kuvaja, Erno
> -Original Message-
> From: Alan Pevec [mailto:ape...@gmail.com]
> Sent: Friday, November 20, 2015 10:46 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno)
> WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
> 
> > So we were brainstorming this with Rocky the other night. Would this be
> possible to do by following:
> > 1) we still tag juno EOL in few days time
> > 2) we do not remove the stable/juno branch
> 
> Why not?
> 
> > 3) we run periodic grenade jobs for kilo
> 
> From a quick look, grenade should work with a juno-eol tag instead of
> stable/juno, it's just a git reference.
> "Zombie" Juno->Kilo grenade job would need to set
> BASE_DEVSTACK_BRANCH=juno-eol and for devstack all
> $PROJECT_BRANCH=juno-eol (or 2014.2.4 should be the same commit).
> Maybe I'm missing some corner case in devstack where stable/* is assumed
> but if so that should be fixed anyway.
> Leaving the branch around is a bad message; it implies there is support for
> it, while there is not.
> 
> Cheers,
> Alan

That sounds like an easy compromise.

- Erno
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-20 Thread Kuvaja, Erno
> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: Friday, November 20, 2015 10:45 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno)
> WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
> 
> Kuvaja, Erno wrote:
> > So we were brainstorming this with Rocky the other night. Would this be
> possible to do by following:
> > 1) we still tag juno EOL in few days time
> > 2) we do not remove the stable/juno branch
> > 3) we run periodic grenade jobs for kilo
> >
> > I'm not that familiar with the grenade job itself so I'm doing couple of
> assumptions, please correct me if I'm wrong.
> > 1) We could do this with py27 only
> > 2) We could do this with Ubuntu 1404 only
> >
> > If this is doable would we need anything special for these jobs in infra 
> > point
> of view or can we just schedule these jobs from the pool running our other
> jobs as well?
> > If so is there still "quiet" slots on the infra utilization so that we 
> > would not
> be needing extra resources poured in for this?
> > Is there something else we would need to consider in QA/infra point of
> view?
> >
> > Benefits for this approach:
> > 1) The upgrade to kilo would be still tested occasionally.
> > 2) Less work for setting up the jobs as we do the installs from the
> > stable branch currently (vs. installing the last from tarball)
> >
> > What we should have as requirements for doing this:
> > 1) Someone making the changes to the jobs so that the grenade job gets
> ran periodically.
> > 2) Someone looking after these jobs.
> > 3) Criteria for stop doing this, X failed runs, some set timeperiod,
> > something else. (and removing the stable/juno branch)
> >
> > Big question ref the 2), what can we do if the grenade starts failing? In
> theory we won't be merging anything to kilo that _should_ cause this and we
> definitely will not be merging anything to Juno to fix these issues anymore.
> How much maintenance those grenade jobs themselves needs?
> >
> > So all in all, is the cost doing above too much to get indicator that tells 
> > us
> when Juno --> Kilo upgrade is not doable anymore?
> 
> Let's wait a bit for this discussion for the return of the Infra PTL from
> vacation, his input is critical to any decision we can make. Jeremy should be
> back on Monday.
> 
> --
> Thierry Carrez (ttx)

Sure, I didn't know that he was on holiday, but there was a reason why I added
the infra and qa tags to the subject. Like you said, infra being able to
facilitate this is crucial for any plan.

- Erno
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-20 Thread Kuvaja, Erno
> -Original Message-
> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> Sent: Tuesday, November 17, 2015 2:57 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re:
> [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
> 
> 
> 
> On 11/16/2015 8:49 PM, Rochelle Grober wrote:
> > I would like to make a plea that while Juno is locked down so as no changes
> can be made against it, the branch remains on the git.openstack.org site.
> Please?  One area that could be better investigated with the branch in place
> is upgrade.  Kilo will continue to get patches, as will Liberty, so an 
> occasional
> grenade run (once a week?  more often?  Less often) could help operators
> understand what is in store for them when they finally can upgrade from
> Juno.  Yes, it will require occasional resources for the run, but I think 
> this is
> one of the cheapest forms of insurance in support of the installed base of
> users, before a Stable Release team is put together.
> >
> > My $.02
> >
> > --Rocky
> >
> >> -Original Message-
> >> From: Gary Kotton [mailto:gkot...@vmware.com]
> >> Sent: Friday, November 13, 2015 6:04 AM
> >> To: Flavio Percoco; OpenStack Development Mailing List (not for usage
> >> questions)
> >> Subject: Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re:
> >> [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
> >>
> >>
> >>
> >> On 11/13/15, 3:23 PM, "Flavio Percoco"  wrote:
> >>
> >>> On 10/11/15 16:11 +0100, Alan Pevec wrote:
>  Hi,
> 
>  while we continue discussion about the future of stable branches in
>  general and stable/juno in particular, I'd like to execute the
> >> current
>  plan which was[1]
> 
>  2014.2.4 (eol) early November, 2015. release manager: apevec
> 
>  Iff there's enough folks interested (I'm not) in keep Juno alive
> >>
> >> +1 I do not see any reason why we should still invest time and effort
> >> here. Lets focus on stable/kilo
> >>
>  longer, they could resurrect it but until concrete plan is done
>  let's be honest and stick to the agreed plan.
> 
>  This is a call to stable-maint teams for Nova, Keystone, Glance,
>  Cinder, Neutron, Horizon, Heat, Ceilometer, Trove and Sahara to
> >> review
>  open stable/juno changes[2] and approve/abandon them as
> appropriate.
>  Proposed timeline is:
>  * Thursday Nov 12 stable/juno freeze[3]
>  * Thursday Nov 19 release 2014.2.1
> 
> >>>
> >>> General ack from a stable-maint point of view! +1 on the above
> >>>
> >>> Flavio
> >>>
>  Cheers,
>  Alan
> 
>  [1]
>  https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fjuno_releases_.2812_months.29
> 
>  [2]
>  https://review.openstack.org/#/q/status:open+AND+branch:stable/juno+AND+%28project:openstack/nova+OR+project:openstack/keystone+OR+project:openstack/glance+OR+project:openstack/cinder+OR+project:openstack/neutron+OR+project:openstack/horizon+OR+project:openstack/heat+OR+project:openstack/ceilometer+OR+project:openstack/trove+OR+project:openstack/sahara%29,n,z
> 
>  [3] documented in
>  https://wiki.openstack.org/wiki/StableBranch#Stable_release_managers
>  TODO add in new location
>  http://docs.openstack.org/project-team-guide/stable-branches.html
> 
> 
> __
>  OpenStack Development Mailing List (not for usage questions)
>  Unsubscribe:
>  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>> --
> >>> @flaper87
> >>> Flavio Percoco
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-
> >> requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> 
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> I'm assuming you mean grenade runs on stable/kilo. A grenade job on
> stable/kilo is installing stable/juno and then upgrading to stable/kilo (the
> change being tested is on stable/kilo). The grenade jobs for stable/juno were
> stopped when icehouse-eol happened.
> 
> Arguably we could still be testing grenade on stable/kilo by just installing
> Juno 2014.2.4 (last Juno 

Re: [openstack-dev] [stable][neutron] How we handle Kilo backports

2015-11-19 Thread Kuvaja, Erno
> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: Thursday, November 19, 2015 10:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [stable][neutron] How we handle Kilo
> backports
> 
> Tony Breeds  wrote:
> 
> > On Wed, Nov 18, 2015 at 05:44:38PM +0100, Ihar Hrachyshka wrote:
> >> Hi all,
> >>
> >> as per [1] I imply that all projects under stable-maint-core team
> >> supervision must abide the stable policy [2] which limits the types
> >> of backports for N-2 branches (now it’s stable/kilo) to "Only
> >> critical bugfixes and security patches”. With that, I remind all
> >> stable core members about the rule.
> >>
> >> Since we are limited to ‘critical bugfixes’ only, and since there is
> >> no clear definition of what ‘critical’ means, I guess we should
> >> define it for ourselves.
> >>
> >> In Neutron world, we usually use Critical importance for those bugs
> >> that break gate. High is used for those bugs that have high impact
> >> production wise. With that in mind, I suggest we define ‘critical’
> >> bugfixes as Critical
> >> + High in LP. Comments on that?
> >
> > So I'm not a core but I check the severity of the bug and query the
> > review owner if it is < High.  My rationale is that sometimes bugs are
> > mis-classified, someone took the time to backport it so it's critical
> > to that person if not the project.
> >
> > Note that doesn't mean they'll get in but it facilitates the discussion.
> >
> > Anyway we can iterate on this: https://review.openstack.org/247229
> 
> I believe it’s fine to change bug importance later based on revealed data, or
> when initial triaging was not correct. I think making clear that discussion
> about backport applicability for a branch should be set around LP importance
> field may add more transparency to how we select backport candidates.
> 
> I also believe that we should not be afraid of other backport types, as long
> as we may guarantee their safety (e.g. it's ok to backport a stability fix;
> a fix that adds more test coverage; a fix for a typo in a message or a
> config file; etc.; yes, those types of bugs are not high impact, but there
> is no real reason not to deliver them to users).
> 
> I sent a patch to stable policy to clarify the latter:
> 
> https://review.openstack.org/247415
> 
> Ihar

Stability fixes might make sense on a case-by-case basis, and I have been
looking at all the backports more or less case by case anyway. Anything that
throws a 500 from the API, where the fix does not change other behavior, is
IMO a critical fix.

Typo fixes are not a good idea for stable branches. While a typo might be a
bit annoying or amusing for the English user, fixing typos in kilo will break
the translations for everyone else, and I haven't seen any translation patches
being proposed to older stable branches.

- Erno


Re: [openstack-dev] [stable] Making stable maintenance its own OpenStack project team

2015-11-10 Thread Kuvaja, Erno
> -Original Message-
> From: Matthew Treinish [mailto:mtrein...@kortar.org]
> Sent: Tuesday, November 10, 2015 3:12 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [stable] Making stable maintenance its own
> OpenStack project team
> 
> On Mon, Nov 09, 2015 at 10:54:43PM +, Kuvaja, Erno wrote:
> > > On Mon, Nov 09, 2015 at 05:28:45PM -0500, Doug Hellmann wrote:
> > > > Excerpts from Matt Riedemann's message of 2015-11-09 16:05:29 -0600:
> > > > >
> > > > > On 11/9/2015 10:41 AM, Thierry Carrez wrote:
> > > > > > Hi everyone,
> > > > > >
> > > > > > A few cycles ago we set up the Release Cycle Management team
> > > > > > which was a bit of a frankenteam of the things I happened to be
> leading:
> > > > > > release management, stable branch maintenance and
> > > > > > vulnerability
> > > management.
> > > > > > While you could argue that there was some overlap between
> > > > > > those functions (as in, "all these things need to be
> > > > > > released") logic was not the primary reason they were put together.
> > > > > >
> > > > > > When the Security Team was created, the VMT was spun out of
> > > > > > the Release Cycle Management team and joined there. Now I
> > > > > > think we should spin out stable branch maintenance as well:
> > > > > >
> > > > > > * A good chunk of the stable team work used to be stable point
> > > > > > release management, but as of stable/liberty this is now done
> > > > > > by the release management team and triggered by the
> > > > > > project-specific stable maintenance teams, so there is no more
> > > > > > overlap in tooling used there
> > > > > >
> > > > > > * Following the kilo reform, the stable team is now focused on
> > > > > > defining and enforcing a common stable branch policy[1],
> > > > > > rather than approving every patch. Being more visible and
> > > > > > having more dedicated members can only help in that very
> > > > > > specific mission
> > > > > >
> > > > > > * The release team is now headed by Doug Hellmann, who is
> > > > > > focused on release management and does not have the history I
> > > > > > had with stable branch policy. So it might be the right moment
> > > > > > to refocus release management solely on release management and
> > > > > > get the stable team its own leadership
> > > > > >
> > > > > > * Empowering that team to make its own decisions, giving it
> > > > > > more visibility and recognition will hopefully lead to more
> > > > > > resources being dedicated to it
> > > > > >
> > > > > > * If the team expands, it could finally own stable branch
> > > > > > health and gate fixing. If that ends up all falling under the
> > > > > > same roof, that team could make decisions on support
> > > > > > timeframes as well, since it will be the primary resource to
> > > > > > make that work
> > > > >
> > > > > Isn't this kind of already what the stable maint team does?
> > > > > Well, that and some QA people like mtreinish and sdague.
> > > > >
> > > > > >
> > > > > > So.. good idea ? bad idea ? What do current
> > > > > > stable-maint-core[2] members think of that ? Who thinks they
> > > > > > could step up to lead that
> > > team ?
> > > > > >
> > > > > > [1]
> > > > > > http://docs.openstack.org/project-team-guide/stable-branches.h
> > > > > > tml [2]
> > > > > > https://review.openstack.org/#/admin/groups/530,members
> > > > > >
> > > > >
> > > > > With the decentralizing of the stable branch stuff in Liberty
> > > > > [1] it seems like there would be less use for a PTL for stable
> > > > > branch maintenance - the cats are now herding themselves, right?
> > > > > Or at least that's the plan as far as I understood it. And the
> > > > > existing stable branch wizards are more or less around for help
> > > > > and 

Re: [openstack-dev] [stable] Making stable maintenance its own OpenStack project team

2015-11-09 Thread Kuvaja, Erno
> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: 09 November 2015 16:42
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [stable] Making stable maintenance its own
> OpenStack project team
> 
> Hi everyone,
> 
> A few cycles ago we set up the Release Cycle Management team which was a
> bit of a frankenteam of the things I happened to be leading: release
> management, stable branch maintenance and vulnerability management.
> While you could argue that there was some overlap between those functions
> (as in, "all these things need to be released") logic was not the primary
> reason they were put together.
> 
> When the Security Team was created, the VMT was spun out of the
> Release Cycle Management team and joined there. Now I think we should
> spin out stable branch maintenance as well:
> 
> * A good chunk of the stable team work used to be stable point release
> management, but as of stable/liberty this is now done by the release
> management team and triggered by the project-specific stable maintenance
> teams, so there is no more overlap in tooling used there
> 
> * Following the kilo reform, the stable team is now focused on defining and
> enforcing a common stable branch policy[1], rather than approving every
> patch. Being more visible and having more dedicated members can only help
> in that very specific mission
> 
> * The release team is now headed by Doug Hellmann, who is focused on
> release management and does not have the history I had with stable branch
> policy. So it might be the right moment to refocus release management
> solely on release management and get the stable team its own leadership
> 
> * Empowering that team to make its own decisions, giving it more visibility
> and recognition will hopefully lead to more resources being dedicated to it
> 
> * If the team expands, it could finally own stable branch health and gate
> fixing. If that ends up all falling under the same roof, that team could make
> decisions on support timeframes as well, since it will be the primary resource
> to make that work
> 
> So.. good idea ? bad idea ? What do current stable-maint-core[2] members
> think of that ? Who thinks they could step up to lead that team ?
> 
> [1] http://docs.openstack.org/project-team-guide/stable-branches.html
> [2] https://review.openstack.org/#/admin/groups/530,members
> 
> --
> Thierry Carrez (ttx)


Hi Thierry,

Thanks for bringing this up. The timing couldn't have been better.

I had a lengthy discussion with Flavio Percoco in Tokyo. He asked me what 
things I'd like to see the TC take on and fix over the next cycle. After 
ranting to him for 20 minutes about how I'd like to have actual stable branches 
in OpenStack, not just a tag and a branch called stable, and to find the will 
and resources to do that, I turned to him and asked: "But we really do not need 
the TC for that, do we?"

I'm not part of the global stable-maint-core, but I have been the stable branch 
liaison for Glance for a year now. I can say that our stable branches are not 
just a tag and a branch; they are actually maintained, by a really small group 
of people (which has, btw, been growing over that whole time). Ihar started a 
discussion about implementing something similar for the Neutron stable 
branches, which proves to me that there is a will to work on this problem.

Based on the discussion around the wish to extend support for Juno, there is 
definitely an urge to have our stable releases maintained and supported longer. 
I don't think Juno is a reasonable place to expect that to happen, but perhaps 
we will get there.

I think what you are proposing here is the right direction to go, not because 
it used to be part of a "frankenteam" led by you, but because I think the group 
of people working on our stable branches deserves and needs recognition for the 
work. I think that would be the first step to getting people involved. Now that 
ownership has moved to the individual projects, I'd like to see this team not 
replace that, but act as a liberally inclusive cross-project (meta) team that 
unites the efforts between the projects, makes the lives of those people 
easier, and makes the effort easier to justify. I'm in a lucky position: after 
coming back home I had a discussion with my management and got a commitment to 
spend a significant amount of my time working on stable across OpenStack 
projects.

I do realize that there is a very small and dedicated group of people fixing 
the gates and doing all the magic around that issue. If we get more traction 
and interest in doing backports proactively, hopefully the health of those 
stable gates will interest more people as well, and we can spread that load 
away from those few and valuable folks.

Based on this, I would like to put my name up for the task.

Whether it is me or someone else leading this team if it gets formed, count me 
in. I will put in the effort to make our stable branches better anyway.

Best,
Erno (jokke_) Kuvaja


Re: [openstack-dev] [stable] Making stable maintenance its own OpenStack project team

2015-11-09 Thread Kuvaja, Erno
> -Original Message-
> From: Matthew Treinish [mailto:mtrein...@kortar.org]
> Sent: 09 November 2015 22:40
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [stable] Making stable maintenance its own
> OpenStack project team
> 
> On Mon, Nov 09, 2015 at 05:28:45PM -0500, Doug Hellmann wrote:
> > Excerpts from Matt Riedemann's message of 2015-11-09 16:05:29 -0600:
> > >
> > > On 11/9/2015 10:41 AM, Thierry Carrez wrote:
> > > > Hi everyone,
> > > >
> > > > A few cycles ago we set up the Release Cycle Management team which
> > > > was a bit of a frankenteam of the things I happened to be leading:
> > > > release management, stable branch maintenance and vulnerability
> management.
> > > > While you could argue that there was some overlap between those
> > > > functions (as in, "all these things need to be released") logic
> > > > was not the primary reason they were put together.
> > > >
> > > > When the Security Team was created, the VMT was spun out of the
> > > > Release Cycle Management team and joined there. Now I think we
> > > > should spin out stable branch maintenance as well:
> > > >
> > > > * A good chunk of the stable team work used to be stable point
> > > > release management, but as of stable/liberty this is now done by
> > > > the release management team and triggered by the project-specific
> > > > stable maintenance teams, so there is no more overlap in tooling
> > > > used there
> > > >
> > > > * Following the kilo reform, the stable team is now focused on
> > > > defining and enforcing a common stable branch policy[1], rather
> > > > than approving every patch. Being more visible and having more
> > > > dedicated members can only help in that very specific mission
> > > >
> > > > * The release team is now headed by Doug Hellmann, who is focused
> > > > on release management and does not have the history I had with
> > > > stable branch policy. So it might be the right moment to refocus
> > > > release management solely on release management and get the stable
> > > > team its own leadership
> > > >
> > > > * Empowering that team to make its own decisions, giving it more
> > > > visibility and recognition will hopefully lead to more resources
> > > > being dedicated to it
> > > >
> > > > * If the team expands, it could finally own stable branch health
> > > > and gate fixing. If that ends up all falling under the same roof,
> > > > that team could make decisions on support timeframes as well,
> > > > since it will be the primary resource to make that work
> > >
> > > Isn't this kind of already what the stable maint team does? Well,
> > > that and some QA people like mtreinish and sdague.
> > >
> > > >
> > > > So.. good idea ? bad idea ? What do current stable-maint-core[2]
> > > > members think of that ? Who thinks they could step up to lead that
> team ?
> > > >
> > > > [1]
> > > > http://docs.openstack.org/project-team-guide/stable-branches.html
> > > > [2] https://review.openstack.org/#/admin/groups/530,members
> > > >
> > >
> > > With the decentralizing of the stable branch stuff in Liberty [1] it
> > > seems like there would be less use for a PTL for stable branch
> > > maintenance - the cats are now herding themselves, right? Or at
> > > least that's the plan as far as I understood it. And the existing
> > > stable branch wizards are more or less around for help and answering
> questions.
> >
> > The same might be said about releasing from master and the release
> > management team. There's still some benefit to having people dedicated
> > to making sure projects all agree to sane policies and to keep up with
> > deliverables that need to be released.
> 
> Except the distinction is that relmgt is actually producing something. Relmgt
> has the releases repo which does centralize library releases, reno to do the
> release notes, etc. What does the global stable core do? Right now it's there
> almost entirely to just add people to the project specific stable core teams.
> 
> -Matt Treinish


I'd like to move the discussion away from the roles of the current 
stable-maint-core and towards what the benefits would be of having a 
stable-maint team rather than the -core group alone.

Personally, I think stable maintenance should be quite a lot more than 
unblocking the gate and approving the people allowed to merge to the stable 
branches.

- Erno
> 
> > >
> > > [1]
> > > http://lists.openstack.org/pipermail/openstack-dev/2015-November/078
> > > 281.html
> > >



Re: [openstack-dev] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Kuvaja, Erno
> -Original Message-
> From: Tony Breeds [mailto:t...@bakeyournoodle.com]
> Sent: Friday, November 06, 2015 6:15 AM
> To: OpenStack Development Mailing List
> Cc: openstack-operat...@lists.openstack.org
> Subject: [openstack-dev] [stable][all] Keeping Juno "alive" for longer.
> 
> Hello all,
> 
> I'll start by acknowledging that this is a big and complex issue and I do not
> claim to be across all the view points, nor do I claim to be particularly
> persuasive ;P
> 
> Having stated that, I'd like to seek constructive feedback on the idea of
> keeping Juno around for a little longer.  During the summit I spoke to a
> number of operators, vendors and developers on this topic.  There was some
> support and some "That's crazy pants!" responses.  I clearly didn't make it
> around to everyone, hence this email.

I'm not a big fan of this idea, for a number of reasons below.
> 
> Acknowledging my affiliation/bias:  I work for Rackspace in the private cloud
> team.  We support a number of customers currently running Juno that are,
> for a variety of reasons, challenged by the Kilo upgrade.

I'm working at HPE in the Cloud Engineering team, fwiw.
> 
> Here is a summary of the main points that have come up in my
> conversations, both for and against.
> 
> Keep Juno:
>  * According to the current user survey[1] Icehouse still has the
>biggest install base in production clouds.  Juno is second, which makes
>sense. If we EOL Juno this month that means ~75% of production clouds
>will be running an EOL'd release.  Clearly many of these operators have
>support contracts from their vendor, so those operators won't be left
>completely adrift, but I believe it's the vendors that benefit from keeping
>Juno around. By working together *in the community* we'll see the best
>results.

As you say, there should be some support base for these releases. 
Unfortunately, very little of that has been reflected upstream. It looks like 
these vendors and operators keep backporting to their own forks but do not 
propose the backports to the upstream branches, or these installations are not 
really maintained.
> 
>  * We only recently EOL'd Icehouse[2].  Sure it was well communicated, but
> we
>still have a huge Icehouse/Juno install base.
> 
> For me this is pretty compelling but for balance 
> 
> Keep the current plan and EOL Juno Real Soon Now:
>  * There is also no ignoring the elephant in the room that with HP stepping
>back from public cloud there are questions about our CI capacity, and
>keeping Juno will have an impact on that critical resource.

I'll leave this point open, as I do not know what our plans towards infra are. 
Perhaps someone who does know could shed some light.
> 
>  * Juno (and other stable/*) resources have a non-zero impact on *every*
>project, esp. @infra and release management.  We need to ensure this
>isn't too much of a burden.  This mostly means we need enough
> trustworthy
>volunteers.

This has been the main driver for shorter support cycles so far. The group 
maintaining stable branches is small, and at least I haven't seen a huge 
increase in it lately. Stable branches are getting a bit more attention again 
and some great work has been done to ease the workload, but at the same time we 
are taking in loads of new features and projects, which affects infra 
(resource-wise) and gate stability.
> 
>  * Juno is also tied up with Python 2.6 support. When
>Juno goes, so will Python 2.6 which is a happy feeling for a number of
>people, and more importantly reduces complexity in our project
>infrastructure.

I know lots of people have been waiting for this, myself included.
> 
>  * Even if we keep Juno for 6 months or 1 year, that doesn't help vendors
>that are "on the hook" for multiple years of support, so for that case
>we're really only delaying the inevitable.
> 
>  * Some number of the production clouds may never migrate from $version,
> in
>which case longer support for Juno isn't going to help them.

Both very true.
> 
> 
> I'm sure these question were well discussed at the VYR summit where we
> set the EOL date for Juno, but I was new then :) What I'm asking is:
> 
> 1) Is it even possible to keep Juno alive (is the impact on the project as
>a whole acceptable)?

Based on the current status, I do not think so.
> 
> Assuming a positive answer:
> 
> 2) Who's going to do the work?
> - Me, who else?

This is one of the key questions.

> 3) What do we do if people don't actually do the work but we as a community
>have made a commitment?

That was done in YVR: we decided to cut our losses and EOL early.

> 4) If we keep Juno alive for $some_time, does that imply we also bump the
>life cycle on Kilo and liberty and Mitaka etc?

That would be the logical thing to do. At least I don't think Juno was so 
special that it deserves a different schedule than Kilo, Liberty, etc.
> 
> Yours Tony.
> 
> [1] 

Re: [openstack-dev] [neutron][stable] proactive backporting

2015-10-19 Thread Kuvaja, Erno
> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: Friday, October 16, 2015 1:34 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [neutron][stable] proactive backporting
> 
> Hi all,
> 
> I’d like to introduce a new initiative around stable branches for neutron
> official projects (neutron, neutron-*aas, python-neutronclient) that is
> intended to straighten our backporting process and make us more proactive
> in fixing bugs in stable branches. ‘Proactive' meaning: don’t wait until a
> known bug hits a user that consumes stable branches, but backport fixes in
> advance quickly after they hit master.
> 
> The idea is simple: every Fri I walk thru the new commits merged into master
> since last check; produce lists of bugs that are mentioned in Related-
> Bug/Closes-Bug; paste them into:
> 
> https://etherpad.openstack.org/p/stable-bug-candidates-from-master
> 
> Then I click thru the bug report links to determine whether it’s worth a
> backport and briefly classify them. If I have cycles, I also request backports
> where it’s easy (== a mere 'Cherry-Pick to' button click).
> 
> After that, those interested in maintaining neutron stable branches can take
> those bugs one by one and handle them, which means: checking where it
> really applies for backport; creating backport reviews (solving conflicts,
> making tests pass). After it’s up for review for all branches affected and
> applicable, the bug is removed from the list.
> 
> I started on that path two weeks ago, doing initial swipe thru all commits
> starting from stable/liberty spin off. If enough participants join the 
> process,
> we may think of going back into git history to backport interesting fixes from
> stable/liberty into stable/kilo.
> 
> Don’t hesitate to ask about details of the process, and happy backporting,
> 
> Ihar

Hi,

This looks like a neat way to do it. In Glance we do proactive backporting 
constantly, and I have been nominating bugs for series and approving backports 
for a while now. We prefer not to have a user coming to us and telling us that 
they hit a bug in "stable" that we had known about for ages but just didn't 
bother to backport the fix for. It has worked out really well, and people are 
learning to propose these without me needing to read every single commit 
message.

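For anyone wanting to script the Friday sweep, here is a rough sketch of the 
commit scan (illustrative only; it assumes the usual Closes-Bug/Related-Bug 
commit message footers and approximates "since last check" with the commit 
date):

import re
import subprocess

def bug_candidates(since='1.week'):
    # Subjects and bodies of non-merge commits landed since `since`.
    log = subprocess.check_output(
        ['git', 'log', '--no-merges', '--since', since, '--pretty=%s%n%b'],
        universal_newlines=True)
    # Pull the Launchpad bug numbers out of the message footers.
    return sorted(set(re.findall(
        r'(?:Closes|Related|Partial)-Bug:\s*#?(\d+)', log)))

for bug in bug_candidates():
    print('https://bugs.launchpad.net/bugs/' + bug)

The de-duplicated list can then be pasted straight into the etherpad for 
triage.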
Good luck; it has worked great for us!

- Erno


Re: [openstack-dev] [glance] Models and validation for v2

2015-10-01 Thread Kuvaja, Erno
Kairat,

We do not validate against the schema on image-list (see 
43769d6cc7266d7c81db31ad58b4fa403c35b611). That said, there has been discussion 
around throwing all that validation code out; exactly as Jay said, we should 
not validate responses coming from our own servers.

That discussion happened just before the 1.0.0 release of glanceclient, which 
moved the CLI to default to the v2 Images API, and we didn't think it 
reasonable to wait until the validation cleanup was done. That said, the work 
is in the pipeline, to be done after we get more important things (like the 
Liberty release) off our hands first.

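For context, the validation being discussed comes from the warlock models the 
client builds from the server-provided image schema. A minimal sketch of that 
behaviour, with a made-up, trimmed-down schema rather than the real one:

import warlock

schema = {
    'name': 'image',
    'properties': {
        'name': {'type': 'string'},
        'visibility': {'enum': ['public', 'private']},
    },
}

Image = warlock.model_factory(schema)

image = Image(name='cirros', visibility='public')  # validated on creation
image.visibility = 'shared'  # fails the enum check and raises InvalidOperation

Building a model from a server response triggers that same creation-time 
validation, which is the part we should not need for data coming from our own 
servers.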

-  Erno

From: Kairat Kushaev [mailto:kkush...@mirantis.com]
Sent: Wednesday, September 30, 2015 7:33 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Models and validation for v2

Agree with you. That's why I am asking about reasoning. Perhaps, we need to 
realize how to get rid of this in glanceclient.

Best regards,
Kairat Kushaev

On Wed, Sep 30, 2015 at 7:04 PM, Jay Pipes wrote:
On 09/30/2015 09:31 AM, Kairat Kushaev wrote:
Hi All,
In short terms, I am wondering why we are validating responses from
server when we are doing
image-show, image-list, member-list, metadef-namespace-show and other
read-only requests.

AFAIK, we are building warlock models when receiving responses from
server (see [0]). Each model requires schema to be fetched from glance
server. It means that each time we are doing image-show, image-list,
image-create, member-list and others we are requesting schema from the
server. AFAIU, we are using models to dynamically validate that object
is in accordance with schema but is it the case when glance receives
responses from the server?

Could somebody please explain me the reasoning of this implementation?
Am I missed some usage cases when validation is required for server
responses?

I also noticed that we already faced some issues with such
implementation that leads to "mocking" validation([1][2]).

The validation should not be done for responses, only ever requests (and it's 
unclear that there is value in doing this on the client side at all, IMHO).

-jay



[openstack-dev] [stable][glance] glance-stable-maint group refresher

2015-09-30 Thread Kuvaja, Erno
Hi all,

I'd like to propose the following changes to the glance-stable-maint team:

1)  Removing Zhi Yan Liu from the group; unfortunately he has moved on to 
other ventures and is not actively participating in our operations anymore.

2)  Adding Mike Fedosin to the group; Mike has been reviewing and 
backporting patches to glance stable branches and is working with the right 
mindset. I think he would be a great addition to share the workload.

Best,
Erno (jokke_) Kuvaja


[openstack-dev] Cross-Project Meeting Tue 29th of Sep, 21:00 UTC

2015-09-29 Thread Kuvaja, Erno
Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting today at 21:00 UTC, with the
following agenda:

* Review of past action items
* Team announcements (horizontal, vertical, diagonal)
* Cross-Project Specs to discuss:
** Service Catalog Standardization [0]
** Backwards compatibility for clients and libraries [1]
* Open discussion

[0] https://review.openstack.org/181393
[1] https://review.openstack.org/226157

If you're from a horizontal team (Release management, QA, Infra, Docs,
Security, I18n...) or a vertical team (Nova, Swift, Keystone...) and
have something to communicate to the other teams, feel free to abuse the
relevant sections of that meeting and make sure it gets #info-ed by the
meetbot in the meeting summary.

See you there!

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

--
Erno (jokke_) Kuvaja



Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-15 Thread Kuvaja, Erno
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com]
> Sent: Monday, September 14, 2015 5:40 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [glance] proposed priorities for Mitaka
> 
> Excerpts from Kuvaja, Erno's message of 2015-09-14 15:02:59 +:
> > > -Original Message-
> > > From: Flavio Percoco [mailto:fla...@redhat.com]
> > > Sent: Monday, September 14, 2015 1:41 PM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [glance] proposed priorities for Mitaka
> > >
> > > On 14/09/15 08:10 -0400, Doug Hellmann wrote:

> > > >
> > > >I. DefCore
> > > >
> > > >The primary issue that attracted my attention was the fact that
> > > >DefCore cannot currently include an image upload API in its
> > > >interoperability test suite, and therefore we do not have a way to
> > > >ensure interoperability between clouds for users or for trademark
> > > >use. The DefCore process has been long, and at times confusing,
> > > >even to those of us following it sort of closely. It's not entirely
> > > >surprising that some projects haven't been following the whole
> > > >time, or aren't aware of exactly what the whole thing means. I have
> > > >proposed a cross-project summit session for the Mitaka summit to
> > > >address this need for communication more broadly, but I'll try to
> summarize a bit here.
> > >
> >
> > Looking at how different OpenStack-based public clouds limit or fully
> > prevent their users from uploading images to their deployments, I'm not
> > convinced the Image Upload should be included in this definition.
> 
> The problem with that approach is that it means end consumers of those
> clouds cannot write common tools that include image uploads, which is a
> frequently used/desired feature. What makes that feature so special that we
> don't care about it for interoperability?
> 

I'm not sure it really is so special, API- or technical-wise; it's just the one 
that was lifted onto the pedestal in this discussion.

> >
> > > +1
> > >
> > > I think it's quite sad that some projects, especially those
> > > considered to be part of the `starter-kit:compute`[0], don't follow
> > > closely what's going on in DefCore. I personally consider this a
> > > task PTLs should incorporate in their role duties. I'm glad you
> > > proposed such session, I hope it'll help raising awareness of this effort
> and it'll help moving things forward on that front.
> > >
> > >
> > > >
> > > >DefCore is using automated tests, combined with business policies,
> > > >to build a set of criteria for allowing trademark use. One of the
> > > >goals of that process is to ensure that all OpenStack deployments
> > > >are interoperable, so that users who write programs that talk to
> > > >one cloud can use the same program with another cloud easily. This
> > > >is a *REST
> > > >API* level of compatibility. We cannot insert cloud-specific
> > > >behavior into our client libraries, because not all cloud consumers
> > > >will use those libraries to talk to the services. Similarly, we
> > > >can't put the logic in the test suite, because that defeats the
> > > >entire purpose of making the APIs interoperable. For this level of
> > > >compatibility to work, we need well-defined APIs, with a long
> > > >support period, that work the same no matter how the cloud is
> > > >deployed. We need the entire community to support this effort. From
> > > >what I can tell, that is going to require some changes to the
> > > >current Glance API to meet the requirements. I'll list those
> > > >requirements, and I hope we can discuss them to a degree that
> > > >ensures everyone understands them. I don't want this email thread
> > > >to get bogged down in implementation details or API designs,
> > > >though, so let's try to keep the discussion at a somewhat high
> > > >level, and leave the details for specs and summit discussions. I do
> > > >hope you will correct any misunderstandings or misconceptions,
> > > >because unwinding this as an outside observer has been quite a
> challenge and it's likely I have some details wrong.
> >
> > This just reinforces my doubt above. Including upload in the DefCore
> > requirements probably just locks out lots of the public clouds out there.
> > Is that the intention here?
> 
> No, absolutely not. The intention is to provide clear technical direction 
> about
> what we think the API for uploading images should be.
> 

Great, that's an easy goal to stand behind and support!

> >
> > > >

> > >
> > > The task upload process you're referring to is the one that uses the
> > > `import` task, which allows you to download an image from an
> > > external source, asynchronously, and import it in Glance. This is
> > > the old `copy-from` behavior that was moved into a task.
> > >
> > > The "fun" thing about this - and I'm sure other folks in the Glance
> > > community will disagree - is that I don't consider tasks to be a
> > > public API. That is to say, 

Re: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my concerns

2015-09-14 Thread Kuvaja, Erno
> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: Monday, September 14, 2015 9:03 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
> concerns
> 

> 
> Or are you suggesting it is preferable to hide that risk from our
> operators/users, to protect that project team developers ?
> 
> --
> Thierry Carrez (ttx)
> 
Unfortunately this seems to be the trend, not only in OpenStack but in society. 
Everything needs to be friendly to everyone and politically correct; it's not 
OK to call difficult topics by their real names because someone involved might 
get their feelings hurt, and it's not OK to compete because the losers might 
get their feelings hurt.

While it is a bit of a double-edged sword, I think this is an exact example of 
that. One could question whether a project has a reason to exist if saying out 
loud "it does not have diversity in its development community" will kill it. I 
think there are a good number of examples both ways in the open source world: 
abandoned projects get picked up because there are people who think they still 
have a use case and value, while on the other side promising projects get 
forgotten because no one else really felt the urge to keep 'em alive.

Personally, I feel this is a bit like stamping a feature experimental: "Please 
feel free to play around with it, but we discourage you from deploying it in 
production unless you're willing to pick up its maintenance if the team decides 
to do something else." There is nothing wrong with that.

I don't think these projects should be hiding behind the valance of the big 
tent, and consumer expectations should be set at least close to reality without 
consumers needing to do a huge amount of detective work. That was the point of 
the tags in the first place, no?

Obviously the above is just my blunt self. If someone went and rage-killed 
their project because of that: good for you, now pull yourself together and do 
it again. ;)

- Erno (jokke) Kuvaja



Re: [openstack-dev] [all][ptl][release] final liberty cycle client library releases needed

2015-09-14 Thread Kuvaja, Erno
Hi Doug,

Please find the python-glanceclient 1.0.1 release request at 
https://review.openstack.org/#/c/222716/

- Erno

> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com]
> Sent: Monday, September 14, 2015 1:46 PM
> To: openstack-dev
> Subject: [openstack-dev] [all][ptl][release] final liberty cycle client 
> library
> releases needed
> 
> PTLs and release liaisons,
> 
> In order to keep the rest of our schedule for the end-of-cycle release tasks,
> we need to have final releases for all client libraries in the next day or 
> two.
> 
> If you have not already submitted your final release request for this cycle,
> please do that as soon as possible.
> 
> If you *have* already submitted your final release request for this cycle,
> please reply to this email and let me know that you have so I can create your
> stable/liberty branch.
> 
> Thanks!
> Doug
> 


Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-14 Thread Kuvaja, Erno
> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: Monday, September 14, 2015 1:41 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [glance] proposed priorities for Mitaka
> 
> On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> >
> >After having some conversations with folks at the Ops Midcycle a few
> >weeks ago, and observing some of the more recent email threads related
> >to glance, glance-store, the client, and the API, I spent last week
> >contacting a few of you individually to learn more about some of the
> >issues confronting the Glance team. I had some very frank, but I think
> >constructive, conversations with all of you about the issues as you see
> >them. As promised, this is the public email thread to discuss what I
> >found, and to see if we can agree on what the Glance team should be
> >focusing on going into the Mitaka summit and development cycle and how
> >the rest of the community can support you in those efforts.
> >
> >I apologize for the length of this email, but there's a lot to go over.
> >I've identified 2 high priority items that I think are critical for the
> >team to be focusing on starting right away in order to use the upcoming
> >summit time effectively. I will also describe several other issues that
> >need to be addressed but that are less immediately critical. First the
> >high priority items:
> >
> >1. Resolve the situation preventing the DefCore committee from
> >   including image upload capabilities in the tests used for trademark
> >   and interoperability validation.
> >
> >2. Follow through on the original commitment of the project to
> >   provide an image API by completing the integration work with
> >   nova and cinder to ensure V2 API adoption.
> 
> Hi Doug,
> 
> First and foremost, I'd like to thank you for taking the time to dig into 
> these
> issues, and for reaching out to the community seeking for information and a
> better understanding of what the real issues are. I can imagine how much
> time you had to dedicate on this and I'm glad you did.

++ Really, thanks for taking the time for this.
> 
> Now, to your email, I very much agree with the priorities you mentioned
> above and I'd like for, whomever will win Glance's PTL election, to bring 
> focus
> back on that.
> 
> Please, find some comments in-line for each point:
> 
> 
> >
> >I. DefCore
> >
> >The primary issue that attracted my attention was the fact that DefCore
> >cannot currently include an image upload API in its interoperability
> >test suite, and therefore we do not have a way to ensure
> >interoperability between clouds for users or for trademark use. The
> >DefCore process has been long, and at times confusing, even to those of
> >us following it sort of closely. It's not entirely surprising that some
> >projects haven't been following the whole time, or aren't aware of
> >exactly what the whole thing means. I have proposed a cross-project
> >summit session for the Mitaka summit to address this need for
> >communication more broadly, but I'll try to summarize a bit here.
> 

Looking at how different OpenStack-based public clouds limit or fully prevent 
their users from uploading images to their deployments, I'm not convinced the 
Image Upload should be included in this definition.
 
> +1
> 
> I think it's quite sad that some projects, especially those considered to be
> part of the `starter-kit:compute`[0], don't follow closely what's going on in
> DefCore. I personally consider this a task PTLs should incorporate in their 
> role
> duties. I'm glad you proposed such session, I hope it'll help raising 
> awareness
> of this effort and it'll help moving things forward on that front.
> 
> 
> >
> >DefCore is using automated tests, combined with business policies, to
> >build a set of criteria for allowing trademark use. One of the goals of
> >that process is to ensure that all OpenStack deployments are
> >interoperable, so that users who write programs that talk to one cloud
> >can use the same program with another cloud easily. This is a *REST
> >API* level of compatibility. We cannot insert cloud-specific behavior
> >into our client libraries, because not all cloud consumers will use
> >those libraries to talk to the services. Similarly, we can't put the
> >logic in the test suite, because that defeats the entire purpose of
> >making the APIs interoperable. For this level of compatibility to work,
> >we need well-defined APIs, with a long support period, that work the
> >same no matter how the cloud is deployed. We need the entire community
> >to support this effort. From what I can tell, that is going to require
> >some changes to the current Glance API to meet the requirements. I'll
> >list those requirements, and I hope we can discuss them to a degree
> >that ensures everyone understands them. I don't want this email thread
> >to get bogged down in implementation details or API designs, though, so
> >let's 

Re: [openstack-dev] [Glance] glance core rotation part 1

2015-09-14 Thread Kuvaja, Erno
+1

From: Alex Meade [mailto:mr.alex.me...@gmail.com]
Sent: Friday, September 11, 2015 7:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] glance core rotation part 1

+1

On Fri, Sep 11, 2015 at 2:33 PM, Ian Cordasco wrote:


-Original Message-
From: Nikhil Komawar
Reply: OpenStack Development Mailing List (not for usage questions)
Date: September 11, 2015 at 09:30:23
To: openstack-dev@lists.openstack.org
Subject:  [openstack-dev] [Glance] glance core rotation part 1

> Hi,
>
> I would like to propose the following removals from glance-core based on
> the simple criterion of inactivity/limited activity for a long period (2
> cycles or more) of time:
>
> Alex Meade
> Arnaud Legendre
> Mark Washenberger
> Iccha Sethi

I think these are overdue

> Zhi Yan Liu (Limited activity in Kilo and absent in Liberty)

Sad to see Zhi Yan Liu's activity drop off.

> Please vote +1 or -1 and we will decide by Monday EOD PT.

+1

--
Ian Cordasco


Re: [openstack-dev] [glance] differences between def detail() and def index() in glance/registry/api/v1/images.py

2015-09-10 Thread Kuvaja, Erno
This was the case until about two weeks ago.

Since the 1.0.0 release we have defaulted to the Images API v2 instead of v1 [0].

If you want to exercise the v1 functionality from the CLI client, you need to 
specify either the environment variable OS_IMAGE_API_VERSION=1 or the command 
line option --os-image-api-version 1. In either case, --debug can be used with 
glanceclient to provide detailed information about where the request is being 
sent and what the responses are.
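The same pinning can be done from Python, for what it's worth; a small sketch 
(endpoint and token are placeholders):

from glanceclient import Client

glance = Client('1',  # or '2', the CLI default since 1.0.0
                endpoint='http://controller:9292',
                token='ADMIN_TOKEN')
for image in glance.images.list():
    print(image.name)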

If you haven't moved to the latest client yet, forget about the above, apart 
from the --debug part.

[0] 
https://github.com/openstack/python-glanceclient/blob/master/doc/source/index.rst

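And since the original question was how to exercise def index() directly: you 
can hit the two v1 endpoints yourself. A rough sketch with the requests library 
(URL and token are placeholders, and it assumes at least one image exists):

import requests

GLANCE = 'http://controller:9292'
HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}

brief = requests.get(GLANCE + '/v1/images', headers=HEADERS).json()
full = requests.get(GLANCE + '/v1/images/detail', headers=HEADERS).json()

print(sorted(brief['images'][0]))  # the brief field set served by index()
print(sorted(full['images'][0]))   # the full record served by detail()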

-  Erno

From: Fei Long Wang [mailto:feil...@catalyst.net.nz]
Sent: Thursday, September 10, 2015 1:04 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] differences between def detail() and def 
index() in glance/registry/api/v1/images.py

I assume you're using the Glance client. If so, by default, when you issue the 
command 'glance image-list', it will call /v1/images/detail instead of 
/v1/images; you can use curl or any HTTP client to see the difference. 
Basically, just as the endpoint name suggests, /v1/images/detail will give you 
more details. See the difference in their responses below.

Response from /v1/images/detail
{
"images": [
{
"status": "active",
"deleted_at": null,
"name": "fedora-21-atomic-3",
"deleted": false,
"container_format": "bare",
"created_at": "2015-09-03T22:56:37.00",
"disk_format": "qcow2",
"updated_at": "2015-09-03T23:00:15.00",
"min_disk": 0,
"protected": false,
"id": "b940521b-97ff-48d9-a22e-ecc981ec0513",
"min_ram": 0,
"checksum": "d3b3da0e07743805dcc852785c7fc258",
"owner": "5f290ac4b100440b8b4c83fce78c2db7",
"is_public": true,
"virtual_size": null,
"properties": {
"os_distro": "fedora-atomic"
},
"size": 770179072
}
]
}

Response with /v1/images
{
"images": [
{
"name": "fedora-21-atomic-3",
"container_format": "bare",
"disk_format": "qcow2",
"checksum": "d3b3da0e07743805dcc852785c7fc258",
"id": "b940521b-97ff-48d9-a22e-ecc981ec0513",
"size": 770179072
}
]
}
On 10/09/15 11:46, Su Zhang wrote:

Hello,

I am hitting an error and its trace passes through def index() in 
glance/registry/api/v1/images.py.

I assume def index() is called by glance image-list. However, while testing 
glance image-list I realized that def detail() is called in 
glance/registry/api/v1/images.py instead of def index().

Could someone let me know what the difference between the two functions is? 
How can I test out def index() in glance/registry/api/v1/images.py through the 
CLI or the API?

Thanks,

--
Su Zhang





--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--


Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

2015-09-03 Thread Kuvaja, Erno
Malini, all,

My current opinion is -1 for the FFE, based on the concerns with the spec and 
implementation.

I'm more than happy to realign my stance once we have an updated spec and a) 
it's agreed to be the approach as of now, and b) we can evaluate how much work 
the implementation needs to meet the revisited spec.

If we end up in the unfortunate situation that this functionality does not 
merge in time for Liberty, I'm confident that it will be one of the first 
things in Mitaka. I really don't think there is much left to do; we just might 
run out of time.

Thanks for your patience and endless effort to get this done.

Best,
Erno

> -Original Message-
> From: Bhandaru, Malini K [mailto:malini.k.bhand...@intel.com]
> Sent: Thursday, September 03, 2015 10:10 AM
> To: Flavio Percoco; OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal
> 
> Flavio, first thing in the morning Kent will upload a new BP that addresses 
> the
> comments. We would very much appreciate a +1 on the FFE.
> 
> Regards
> Malini
> 
> 
> 
> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: Thursday, September 03, 2015 1:52 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal
> 
> On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
> >Hi,
> >
> >I wanted to propose 'Single disk image OVA import' [1] feature proposal
> >for exception. This looks like a decently safe proposal that should be
> >able to adjust in the extended time period of Liberty. It has been
> >discussed at the Vancouver summit during a work session and the
> >proposal has been trimmed down as per the suggestions then; has been
> >overall accepted by those present during the discussions (barring a few
> >changes needed on the spec itself). It being a addition to already
> >existing import task, doesn't involve API change or change to any of
> >the core Image functionality as of now.
> >
> >Please give your vote: +1 or -1 .
> >
> >[1] https://review.openstack.org/#/c/194868/
> 
> I'd like to see support for OVF being, finally, implemented in Glance.
> Unfortunately, I think there are too many open questions in the spec right
> now to make this FFE worthy.
> 
> Could those questions be answered to before the EOW?
> 
> With those questions answered, we'll be able to provide a more, realistic,
> vote.
> 
> Also, I'd like us to evaluate how mature the implementation[0] is and the
> likelihood of it addressing the concerns/comments in time.
> 
> For now, it's a -1 from me.
> 
> Thanks all for working on this, this has been a long time requested format to
> have in Glance.
> Flavio
> 
> [0] https://review.openstack.org/#/c/214810/
> 
> 
> --
> @flaper87
> Flavio Percoco


Re: [openstack-dev] [oslo][release] oslo freeze this week?

2015-08-24 Thread Kuvaja, Erno
 -Original Message-
 From: Thierry Carrez [mailto:thie...@openstack.org]
 Sent: Monday, August 24, 2015 1:47 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [oslo][release] oslo freeze this week?
 
 On Aug 23, 2015, at 5:51 PM, Robert Collins robe...@robertcollins.net
 wrote:
  On 24 August 2015 at 09:28, Doug Hellmann d...@doughellmann.com
 wrote:
  I have marked on my version of the release schedule that we will have
 the Oslo libraries frozen this week. Are we still planning to do that? We
 should figure out what that means as far as creating stable branches and
 version caps and all of those things that caused us so much trouble last 
 cycle.
 
  We're not capping anything. We're depending on constraints to carry us
  forward. The constraints for tox stuff works but isn't widely
  deployed: it is partly waiting on a governance change... I think we
  should use this as a forcing function for projects to opt-in to that.
  grenade uses constraints so only stable branches should be affected by
  that.
 
 Back in YVR we had the following process drafted on a whiteboard:
 
 1. Enable master-stable cross-check
 2. Release Oslo, make stable branches for Oslo
 2.1 Converge constraints
 3. liberty-3 / FF / soft requirements freeze 4. hard requirements freeze 5.
 RC1 / make stable branches for services 6. Branch requirements, disable
 cross-check 7. Unfreeze requirements
 
 Is there anything new that makes this proposed process invalid ?
 
 If not, since (3) is Thursday next week, that means we need to cover the first
 3 items in the coming week ?
 
 --
 Thierry Carrez (ttx)
 

That does sound absolutely reasonable.

I'd like to add/ask: where in there do we have the last point to cut the rest 
of the libs? That should probably happen by 3 as well.

- Erno



Re: [openstack-dev] [stable] [infra] How to auto-generate stable release notes

2015-08-21 Thread Kuvaja, Erno
 -Original Message-
 From: Dave Walker [mailto:em...@daviey.com]
 Sent: 21 August 2015 12:43
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [stable] [infra] How to auto-generate stable
 release notes
 
 On 21 August 2015 at 11:38, Thierry Carrez thie...@openstack.org wrote:
 SNIP
  Since then, replying to another concern about common downstream
  reference points, we moved to tagging everything, then replying to
  Clark's pollution remark, to tag from time to time. That doesn't
  remove the need to *conveniently* ship the best release notes we can
  with every commit. Including them in every code tarball (and relying
  on well-known python sdist commands to generate them for those
  consuming the git tree directly) sounded like the most accessible way
  to do it, which the related thread on the Ops ML confirmed. But then
  I'm (and maybe they are) still open to alternative suggestions...
 
 This is probably a good entry point for my ACTION item from the cross-
 project meeting:
 
 I disagree that time to time tagging makes sense in what we are trying to
 achieve.  I believe we are in agreement that we want to move way from co-
 ordinated releases and treat each commit as an accessible release.
 Therefore, tagging each project at arbitrary times introduces snowflake
 releases, rather than the importance being on each commit being a release.
 
 I agree that this would take away the 'co-ordinated' part of the release, but
 still requires release management of each project (unless the time to time
 is automated), which we are not sure that each project will commit to.
 
 If we are treating each commit to be a release, maybe we should just bite
 the bullet and enlarge the ref tag length.  I've not done a comparison of what
 this would look like, but I believe it to be rare that people look at the list
 anyway.  Throwing in a | grep -v ^$RELEASE*, and it becomes as usable as
 before.  We could also expunge the tags after the release is no longer
 supported by upstream.
 
 In my mind, we are then truly treating each commit as a release AND we
 benefit from not needing hacky tooling to fake this.
 
 --
 Kind Regards,
 Dave Walker
 

I do not like the time-to-time tagging either, but I don't think it's a totally 
horrible situation. Let's say we tag every even week on Wednesday, and in the 
event of an OSSA.

The big problem with every commit being a release in stable is that lots of 
tooling around git really doesn't care whether a reference is a branch or a tag 
on branch X. Say I can't remember how I named the branch I'm working on and I 
do `git checkout <tab><tab>`: there is a difference if that list is suddenly in 
the hundreds rather than the dozens. So yes, some kind of deprecation period to 
clean up those old tags would be great at the point we stop supporting certain 
branches.

I do realize that I'm not a git guru, so if there is a really simple way to 
configure that, please let me know and ignore the above. ;)

- Erno


Re: [openstack-dev] [stable] [infra] How to auto-generate stable release notes

2015-08-19 Thread Kuvaja, Erno
 -Original Message-
 From: Robert Collins [mailto:robe...@robertcollins.net]
 Sent: Wednesday, August 19, 2015 11:38 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [stable] [infra] How to auto-generate stable
 release notes
 
 On 19 August 2015 at 21:19, Thierry Carrez thie...@openstack.org wrote:
  Robert Collins wrote:
  [...]
  Proposed data structure:
  - create a top level directory in each repo called release-notes
  - within that create a subdirectory called changes.
  - within the release-notes dir we place yaml files containing the
  release note inputs.
  - within the 'changes' subdirectory, the name of the yaml file will
  be the gerrit change id in a canonical form.
 E.g. I1234abcd.yaml
 This serves two purposes: it guarantees file name uniqueness (no
  merge conflicts) and lets us
 determine which release to group it in (the most recent one, in
  case of merge+revert+merge patterns).

We try to have python-glanceclient and glance_store include release notes at 
release time. We use the in-tree doc/source/index.rst for ease of access. This 
publishes our release notes at 
docs.openstack.org/developer/python-glanceclient/, and you can easily follow 
the stable branches via git: 
https://github.com/openstack/python-glanceclient/blob/stable/kilo/doc/source/index.rst

I've been trying to push the mentality into our community that the last thing 
we do before a release is merge a release notes update and tag that. As for 
stable, I think it's worth adding release notes to the backport workflow.
 
  One small issue I see with using changeid in the filename is that it
  prevents people from easily proposing a backport with a release note
  snippet in them (since they can't predict the changeID). They will
  have to get it generated and then amend their commit.
 
 Backports typically use the original changeID - they will if they use git 
 cherry-
 pick.
 
  I think we need to enforce some more structure. Release notes are
  easier to read if you group them by category. For stable branches you
  should put critical upgrade notes first, then security updates,
  then random release notes. For master branch notes we usually have
  critical upgrade notes, then major features, then known issues,
  then random release notes. So I would encourage a slightly more
  detailed schema with categories to keep consistency and readability.

A strong maybe; this is pointless in stable if our aim is for each commit to be 
a release, or if we anyway only have one or two changes per topic.

This would make sense for the initial release from master, excluding libs and 
clients, which tend to have fewer changes per release anyway.
 
 Sure - please fill it in :). I was winging it, since I don't do release 
 notes, I had
 no idea of your needs.
 
  Processing:
  1) determine the revisions we need to generate release notes for. By
  default generate notes for the current minor release. (E.g. if the
  tree version is 1.2.3.dev4 we would generate release notes for 1.2.0,
  1.2.1, 1.2.2, 1.2.3[which dev4 is leading up to]).
 
  How would that work in a post-versioned world ? What would you
  generate if the tree version is 1.2.3.post12 ?
 
 1.2.3 is still the version, not that we can use post versions at all with pbr.
 (Short story - we can't because we used them wrongly and we haven't had
 nearly enough time to flush out remaining instances in the wild).
 
  2) For each version: scan all the commits to determine gerrit change-id's.
   i) read in all those change ids .yaml files and pull out any notes within
 them.
   ii) read in any full version yaml file (and merge in its contained
  notes)
   iii) Construct a markdown document as follows:
a) Sort any preludes (there should be only one at most, but lets
  not error if there are multiple)
b) sort any notes
 
  We would sort them by category.
 
 The requirement for deterministic results means we'd just sort them.
 If they are divided into categories, we'd sort the list of categories (perhaps
 according to some schema) and then within each category sort the notes.
 
c) for each note transform it into a bullet point by prepending its
  first line with '- ' and every subsequent line with '  ' (two spaces
  to form a hanging indent).
d) strip any trailing \n's from everything.
e) join the result - '\n'.join(chain(preludes, notes))
   iv) we output the resulting file to release-notes/$version.md where
  $version is the pbr reported version for the tree (e.g. in the
  example above it would be 1.2.3.dev4, *not* 1.2.3).
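
As an aside, steps (c)-(e) are small enough to show concretely. A minimal
sketch, assuming each note is a plain string (illustrative only, not the
proposed tool itself):

    from itertools import chain

    def to_bullet(note):
        # (c) '- ' on the first line, two-space hanging indent on the rest
        first, _, rest = note.partition('\n')
        return '\n'.join(['- ' + first] +
                         ['  ' + line for line in rest.splitlines()])

    def build_notes(preludes, notes):
        preludes = sorted(preludes)                      # (a)
        bullets = [to_bullet(n) for n in sorted(notes)]  # (b), then (c)
        parts = [p.rstrip('\n') for p in chain(preludes, bullets)]  # (d)
        return '\n'.join(parts)                          # (e)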
 
  So you would have release-notes/1.2.2.yaml and release-notes/1.2.2.md
  in the same directory, with one being a subset of the data present in
  the other ? That feels a bit confusing. I would rather use two
  separate directories (source and output) for that.
 
 If you like, sure. Though the thing I was thinking was that for very old 
 things
 we might generate the md file, delete the yaml, 

Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and glance

2015-08-13 Thread Kuvaja, Erno


 -Original Message-
 From: Mike Perez [mailto:thin...@gmail.com]
 Sent: Wednesday, August 12, 2015 4:45 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and
 glance
 
 On Wed, Aug 12, 2015 at 2:23 AM, Kuvaja, Erno kuv...@hp.com wrote:
  -Original Message-
  From: Mike Perez [mailto:thin...@gmail.com]
  Sent: 11 August 2015 19:04
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store
  and glance
 
  On 15:06 Aug 11, Kuvaja, Erno wrote:
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
 
  snip
 
Having the image cache local to the compute nodes themselves
gives the best performance overall, and with glance_store, means
that glance-api isn't needed at all, and Glance can become just a
metadata repository, which would be awesome, IMHO.
  
   Do you have any figures to back this up at scale? We've heard similar
   claims for quite a while, and as soon as people actually look into
   how the environments behave, they quite quickly turn back. As you're
   not the first, I'd like to make the same request as to everyone
   before: show your data to back this claim up! Until then it is just
   what you say it is, opinion. ;)
 
  The claims I make with Cinder doing caching on its own versus just
  using Glance with rally with an 8G image:
 
  Creating/deleting 50 volumes w/ Cinder image cache: 324 seconds
  Creating/deleting 50 volumes w/o Cinder image cache: 3952 seconds
 
  http://thing.ee/x/cache_results/
 
  Thanks to Patrick East for pulling these results together.
 
  Keep in mind, this is using a block storage backend that is
  completely separate from the OpenStack nodes. It's *not* using a
  local LVM all in one OpenStack contraption. This is important because
  even if you have Glance caching enabled, and there was no cache miss,
  you still have to dd the bits to the block device, which is still
  going over the network. Unless Glance is going to cache on the storage
 array itself, forget about it.
 
  Glance should be focusing on other issues, rather than trying to make
  copying image bits over the network and dd'ing to a block device faster.
 
  --
  Mike Perez
 
  Thanks Mike,
 
  So without the Cinder cache your times averaged around the 150+ second mark.
  The first couple of volumes with the cache took roughly 170+ seconds.
  What the data does not tell us is whether Cinder was pulling the images
  directly from the Glance backend rather than through Glance in either case.
 
 Oh but I did, and that's the beauty of this: the files marked cinder-cache-
 x.html avoid Glance as soon as they can, using the Cinder generic image
 cache solution [1]. Please reread what I said: Glance is unable to do
 caching in a storage array, so we don't rely on Glance. It's too slow
 otherwise.
 
 Take this example with 50 volumes created from an image with Cinder's
 image cache [2]:
 
 * Is using Glance cache (oh no, cache miss)
 * Downloads the image from whatever glance store
 * dd's the bits to the exported block device.
 * The bits travel to the storage array that the block device was exported
   from.
 * [2nd-50th] request of that same image comes; Cinder instead just
   references a cinder:// endpoint which has the storage array do a
   copy-on-write. ZERO COPYING since we can clone the image. Just a
   reference pointer and done, move on.
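
To make that flow concrete, here is a rough sketch of the clone-vs-seed
decision; purely illustrative, and every name in it is hypothetical rather
than actual Cinder code:

    def create_volume_from_image(image_id, backend, glance):
        # Hypothetical sketch of the generic image cache flow described above.
        cached = backend.lookup_image_volume(image_id)  # hypothetical API
        if cached is not None:
            # 2nd-50th request: copy-on-write clone on the array, no bytes copied
            return backend.clone_volume(cached)
        # Cache miss: download from the Glance store and dd to the exported device
        seed = backend.create_volume()
        glance.download_to(image_id, seed.exported_device)
        backend.mark_as_image_cache(seed, image_id)     # seed for later requests
        return backend.clone_volume(seed)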
 
  Somehow you need to seed those caches, and that seeding time/mechanism
  is where the debate seems to be. Can you afford to keep every image in
  cache so that they are all local? And if you need to pull an image to
  seed your cache, how much do you benefit from your 100 Cinder nodes
  pulling it directly from backend X versus Glance caching/sitting in
  between? How does a block storage backend handle 100 concurrent reads
  by different clients when you are seeding it between different arrays?
  Scale starts to matter here, because it makes a lot of difference to
  the backend whether it's a couple of Cinder or Nova nodes requesting
  the image vs. hundreds of them. Lots of backends tend not to like such
  loads, or we outperform them by not having to fight for bandwidth with
  the backend's other consumers.
 
 Are you seriously asking if a backend is going to withstand concurrent
 reads compared to Glance cache?
 
 All storage backends do is I/O, unlike Glance which is trying to do a million
 things and just pissing off the community.

Thanks, I'm happy to hear that it's not just a couple of us who think that 
the project is lacking focus.
 
 They do it pretty darn well and are a lot more sophisticated than Glance
 cache.
 I'd pick Ceph w/ the Cinder generic image cache doing copy-on-write over
 Glance cache any day.
 
 As it stands, Cinder will be recommending in documentation that users use
 the generic image cache solution over Glance cache

Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and glance

2015-08-12 Thread Kuvaja, Erno
 -Original Message-
 From: Mike Perez [mailto:thin...@gmail.com]
 Sent: 11 August 2015 19:04
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and
 glance
 
 On 15:06 Aug 11, Kuvaja, Erno wrote:
   -Original Message-
   From: Jay Pipes [mailto:jaypi...@gmail.com]
 
 snip
 
   Having the image cache local to the compute nodes themselves gives
   the best performance overall, and with glance_store, means that
   glance-api isn't needed at all, and Glance can become just a
   metadata repository, which would be awesome, IMHO.
 
  Do you have any figures to back this up at scale? We've heard similar
  claims for quite a while, and as soon as people actually look into how
  the environments behave, they quite quickly turn back. As you're not
  the first, I'd like to make the same request as to everyone before:
  show your data to back this claim up! Until then it is just what you
  say it is, opinion. ;)
 
 The claims I make with Cinder doing caching on its own versus just using
 Glance with rally with an 8G image:
 
 Creating/deleting 50 volumes w/ Cinder image cache: 324 seconds
 Creating/deleting 50 volumes w/o Cinder image cache: 3952 seconds
 
 http://thing.ee/x/cache_results/
 
 Thanks to Patrick East for pulling these results together.
 
 Keep in mind, this is using a block storage backend that is completely
 separate from the OpenStack nodes. It's *not* using a local LVM all in one
 OpenStack contraption. This is important because even if you have Glance
 caching enabled, and there was no cache miss, you still have to dd the bits to
 the block device, which is still going over the network. Unless Glance is 
 going
 to cache on the storage array itself, forget about it.
 
 Glance should be focusing on other issues, rather than trying to make
 copying image bits over the network and dd'ing to a block device faster.
 
 --
 Mike Perez
 
Thanks Mike,

So without the Cinder cache your times averaged around the 150+ second mark. The 
first couple of volumes with the cache took roughly 170+ seconds. What the data 
does not tell us is whether Cinder was pulling the images directly from the 
Glance backend rather than through Glance in either of these cases.

Somehow you need to seed those caches, and that seeding time/mechanism is where 
the debate seems to be. Can you afford to keep every image in cache so that 
they are all local? And if you need to pull an image to seed your cache, how much 
do you benefit from your 100 Cinder nodes pulling it directly from 
backend X versus Glance caching/sitting in between? How does a block storage 
backend handle 100 concurrent reads by different clients when you are seeding 
between different arrays? Scale starts to matter here, because it makes a lot 
of difference to the backend whether it's a couple of Cinder or Nova nodes 
requesting the image vs. hundreds of them. Lots of backends tend not to like 
such loads, or we outperform them by not having to fight for bandwidth with the 
backend's other consumers.

The dd part we gladly leave to you; the network transfer takes what it takes, 
and we will still be happily handing the bits over at the other end, so you have 
something to dd. That is our business and we do it pretty well. 

- Erno
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and glance

2015-08-11 Thread Kuvaja, Erno
 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: Tuesday, August 11, 2015 3:10 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and
 glance
 
 On 08/11/2015 09:42 AM, Brian Rosmaita wrote:
  On 8/7/15, 1:07 PM, Jay Pipes jaypi...@gmail.com wrote:
 
  So, here's the crux of the issue. Nova and Cinder **do not want to
  speak the Glance REST API** to either upload or download image bits
  from storage. Streaming image bits through the Glance API endpoint is
  a needless and inefficient step, and Nova and Cinder would like to
  communicate directly with the backend storage systems.
 
  Exactly why do you want to communicate directly with the backend
  storage systems?  Streaming image bits through Glance appears to be
  needless and inefficient, but if an end-user is booting 1K instances
  from some custom image, Glance's image cache makes an enormous
 difference in delivery time.
 
 Nova's image cache makes a bigger difference overall in those cases, since
 the image will most likely be cached on compute nodes themselves and
 won't need to be copied at all.

If that were the case we wouldn't have this problem. If Nova image caching 
solved all those boot-time issues, it probably wouldn't matter that the image 
comes from Glance?
 
 Having the image bits streaming through an unrelated endpoint (the Glance
 API server) is just not required if the logic for grabbing the closest image
 from a set of locations is all in the glance_store library and different nova-
 compute daemons can just efficiently grab the image from multiple locations
 if the image is stored in multiple locations in backend storage.

I think the _if_ there is really relevant. So what you are actually asking is 
not to merge glance_store into Glance, but to merge Glance into glance_store, so 
that glance_store gets all the ACL controls, policies, data integrity promises, 
etc.? It's not just about where the bits are. We could then access it all just 
through Nova's image API. Perhaps provide Nova Images API v2 for Cinder and 
Horizon, and v3 for the future Artifacts consumers. Jay, if you miss Glance so 
much, we'll happily take your contributions to the project; you don't need to 
rant that you want it back as part of Nova in order to contribute. ;)
 
 In addition, Glance's image cache, while excellent for improving performance
 of first-pull of images from slower backend storage like Swift, also requires
 the operator to have a bunch of disk space used on the controller nodes that
 run glance-api. In many deployments that I know of, the controller nodes do
 not run stateful services (DB and MQ are on separate node clusters entirely
 from nova-api, glance-api, cinder-api, etc), and because of this, don't have a
 large root disk (sometimes nothing more than a small SSD for the OS kernel
 and some swap). Setting up Glance's image cache on these types of nodes
 means you need to be careful not to run out of local disk space, since a 
 single
 popular Windows image can easily be 20-40+ GB. In addition to that, each
 glance-api server is going to have its own image cache, not all with the same
 images in them, since different requests will be routed to different glance-
 api servers, and each image cache is its own LRU layout.

This explains a lot.
 
 Having the image cache local to the compute nodes themselves gives the
 best performance overall, and with glance_store, means that glance-api isn't
 needed at all, and Glance can become just a metadata repository, which
 would be awesome, IMHO.

Do you have any figures to back this up at scale? We've heard similar claims for 
quite a while, and as soon as people actually look into how the environments 
behave, they quite quickly turn back. As you're not the first, I'd like to make 
the same request as to everyone before: show your data to back this claim up! 
Until then it is just what you say it is, opinion. ;)

- Erno
 
 Best,
 -jay
 
  So I'm curious about what exactly the use cases for direct backend
  storage communication are, and why Glance can't meet them.
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Stable][Nova] VMware NSXv Support

2015-08-10 Thread Kuvaja, Erno
 -Original Message-
 From: Gary Kotton [mailto:gkot...@vmware.com]
 Sent: Monday, August 10, 2015 4:18 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Stable][Nova] VMware NSXv Support
 
 
 
 On 8/10/15, 6:05 PM, Gary Kotton gkot...@vmware.com wrote:
 
 
 
 On 8/10/15, 6:03 PM, Gary Kotton gkot...@vmware.com wrote:
 
 
 
 On 8/10/15, 5:46 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
 wrote:
 
 
 
 On 8/10/2015 9:17 AM, Gary Kotton wrote:
  Hi,
  I am not really sure what to say here. The code was in review for over
  8 months. On a side note but related - we have a patch for a plugin
  developed in Liberty - https://review.openstack.org/#/c/165750/. This
  has been in review since March. I really hope that that lands in
  Liberty. If not we will go through the same thing again.
  Working in Nova on code that is self contained within a driver is
  difficult - terribly difficult. Not only is this demotivating, it also
  effectively does not help any of the drivers actually add any features.
  A sad day for OpenStack.
  Thanks
  Gary
 
  On 8/5/15, 4:01 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:
 
 
  Hi,
 
  I think Erno made a valid point here. If that would touch only vmware
  code, that could be an option to consider. But it looks like both
  patches are very invasive, and they are not just enabling features
  that are already in the tree, but introduce new stuff that is not even
  tested for long in master.
 
  I guess we'll need to wait for those till Liberty. Unless
  nova-core-maint has a different opinion and good arguments to approach
  the merge.
 
  Ihar
 
  On 08/05/2015 12:37 PM, Kuvaja, Erno wrote:
  Hi Gary,
 
 
 
  While I do understand the interest to get this functionality
  included, I really fail to see how it would comply with the
  Stable Branch Policy:
  https://wiki.openstack.org/wiki/StableBranch#Stable_branch_policy
 
  Obviously the last say is on stable-maint-core, but normally new
  features are really no-no to stable branches.
 
 
 
  My concerns are more on the metadata side of your changes.
 
  Even the refactoring is fairly clean it is major part of the
  metadata handler.
 
  It also changes the API (In the case of X-Metadata-Provider being
  present) which tends to be sacred on stable branches.
 
 
 
  The changes here does not actually fix any bug but just
  implements new functionality that missed kilo not even slightly but
 by months.
  Thus my -1 for merging these.
 
 
 
  -  Erno
 
 
 
  *From:*Gary Kotton [mailto:gkot...@vmware.com] *Sent:*
 Wednesday,
  August 05, 2015 8:03 AM *To:* OpenStack List *Subject:*
  [openstack-dev] [Stable][Nova] VMware NSXv Support
 
 
 
  Hi,
 
  In the Kilo cycle a Neutron driver was added for supporting the
  Vmware NSXv plugin. This required patches in Nova to enable the
  plugin to work with Nova. These patches finally landed yesterday.
  I have back ported them to stable/kilo as the Neutron driver is
  unable to work without these in stable/kilo. The patches can be
  found at:
 
  1. VNIC support - https://review.openstack.org/209372 2. Metadata
  support - https://review.openstack.org/209374
 
  I hope that the stable team can take this into consideration.
 
 
 
  Thanks in advance
 
  Gary
 
 
 
 
 
 __________________________________________________________________________
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __________________________________________________________________________
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __________________________________________________________________________
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 https://review.openstack.org/#/c/165750/ is a feature add but it's
 not targeted against a blueprint, so it's just running as a random
 thing outside any tracking mechanism for features (launchpad).
 
 Salvatore made some comments back in March

Re: [openstack-dev] [Glance][Nova][Cinder] glance_store and glance

2015-08-07 Thread Kuvaja, Erno
Hi,

I've flagged Nova and Cinder into this discussion as they were the first 
intended adopters, IIRC.

I don't have a big religious view on this topic. I wasn't a huge fan of the idea 
of separating it in the first place, and I'm not a huge fan of keeping it 
separate either.

After a couple of cycles we have so far witnessed only the downsides of 
glance_store being on its own. We break even our own gate with our own lib 
releases, we have one extra bug tracker to look after, and while not huge, it 
also increases the load on the release and stable teams.

In my understanding, the interest within Nova to consume glance_store directly 
has pretty much died off since we separated it; please do correct me if I'm 
wrong.
I haven't heard anyone express any interest in consuming glance_store directly 
within Cinder either.
So far I have failed to see a use case for glance_store alone apart from the 
Glance API server, and the originally intended use cases/consumers have either 
not expressed any interest whatsoever or have directly expressed being not 
interested.

Do we have any reason whatsoever to keep doing the extra work of maintaining 
these two components separately? I'm more than happy to do so, or at least to 
extend this discussion for a cycle, if there are projects out there planning to 
utilize it. I don't want to be in the middle of separating it again next cycle 
because someone wanted to consume it and forked the old tree after we decided to 
kill it, but I'm not keen to take the overhead either without reason.

- Erno

 -Original Message-
 From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
 Sent: Friday, August 07, 2015 6:21 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Glance] glance_store and glance
 
 Hi,
 
 During the mid-cycle we had another proposal that wanted to put back the
 glance_store library back into the Glance repo and not leave it is as a
 separate repo/project.
 
 The questions outstanding are: what are the use cases that want it as a
 separate library?
 
 The original use cases that supported a separate lib have not had much
 progress or adoption yet. There have been complaints about overhead of
 maintaining it as a separate lib and version tracking without much gain.
 The proposals for the re-factor of the library is also a worrysome topic in
 terms of the stability of the codebase.
 
 The original use cases from my memory are:
 1. Other projects consuming glance_store -- this has become less likely to be
 useful.
 2. another upload path for users for the convenience of tasks -- not
 preferable as we don't want to expose this library to users.
 3. ease of addition of newer drivers for the developers -- drivers are only
 being removed since.
 4. cleaner api / more methods that support backend store capabilities - a
 separate library is not necessarily needed, smoother re-factor is possible
 within Glance codebase.
 
 Also, the authN/Z complexities and ACL restrictions on the back-end stores
 can be potential security loopholes with the library and Glance evolution
 separately.
 
 In order to move forward smoothly on this topic in Liberty, I hereby request
 input from all concerned developer parties. The decision to keep this as a
 separate library will remain in effect if we do not come to resolution within 
 2
 weeks from now. However, if there aren't any significant use cases we may
 consider a port back of the same.
 
 Please find some corresponding discussion from the latest Glance weekly
 meeting:
 http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-08-06-
 14.03.log.html#l-21
 
 --
 
 Thanks,
 Nikhil
 
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Stable][Nova] VMware NSXv Support

2015-08-05 Thread Kuvaja, Erno
Hi Gary,

While I do understand the interest in getting this functionality included, I 
really fail to see how it would comply with the Stable Branch Policy: 
https://wiki.openstack.org/wiki/StableBranch#Stable_branch_policy
Obviously the last say is with stable-maint-core, but normally new features are 
a real no-no on stable branches.

My concerns are more on the metadata side of your changes.
Even though the refactoring is fairly clean, it touches a major part of the 
metadata handler.
It also changes the API (in the case of X-Metadata-Provider being present), 
which tends to be sacred on stable branches.

The changes here do not actually fix any bug; they just implement new 
functionality that missed Kilo not even slightly, but by months. Thus my -1 for 
merging these.


-  Erno

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Wednesday, August 05, 2015 8:03 AM
To: OpenStack List
Subject: [openstack-dev] [Stable][Nova] VMware NSXv Support

Hi,
In the Kilo cycle a Neutron driver was added for supporting the Vmware NSXv 
plugin. This required patches in Nova to enable the plugin to work with Nova. 
These patches finally landed yesterday. I have back ported them to stable/kilo 
as the Neutron driver is unable to work without these in stable/kilo. The 
patches can be found at:

  1.  VNIC support - https://review.openstack.org/209372
  2.  Metadata support - https://review.openstack.org/209374
I hope that the stable team can take this into consideration.

Thanks in advance
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Removing python-swiftclient from requirements.txt

2015-07-28 Thread Kuvaja, Erno
I agree. We don't depend on the other clients either, and we are cleaning them 
out of the Glance dependencies as well; I think swift should not be there either.

The default store is the filesystem store, and if something depends on the 
actual store clients, it should be either glance_store or the deployer (for 
example our gate); Glance itself should not have hard dependencies on them. A 
guarded import like the one sketched below keeps the dependency optional.
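
A minimal sketch of what such a guarded import could look like in a store
driver; the config keys are assumptions, and this is not the actual
glance_store code:

    try:
        import swiftclient
    except ImportError:
        swiftclient = None  # the dependency stays optional at install time

    def get_swift_connection(conf):
        # Fail only when the swift store is actually selected and used.
        if swiftclient is None:
            raise RuntimeError(
                'python-swiftclient is required to use the swift store')
        return swiftclient.client.Connection(authurl=conf['auth_url'],
                                             user=conf['user'],
                                             key=conf['key'])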


-  Erno

From: William M Edmonds [mailto:edmon...@us.ibm.com]
Sent: Monday, July 27, 2015 10:42 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [glance] Removing python-swiftclient from 
requirements.txt


python-swiftclient is only needed by operators that are using the swift 
backend, so it really doesn't belong in requirements.txt. Listing it in 
requirements forces all operators to install it, even if they're not going to 
use the swift backend. When I proposed a change [1] to move this from 
requirements to test-requirements (would still be needed there because of tests 
using the swift backend), others raised concerns about the impact this could 
have on operators who use the swift backend today and would be upgrading to 
Liberty. I believe everyone agreed this should not be in requirements, but the 
fact is that it has been, so operators may have (incorrectly) been depending on 
that during upgrades. If we remove it in Liberty, and there are changes in 
Liberty that require a newer version of swiftclient, how would those operators 
know that they need to upgrade swiftclient?

The optional dependencies spec [2] could definitely help here. I don't think we 
should have to wait for that, though. This is an issue we obviously already 
have today for other backends, which indicates folks can deal with it without 
an optional dependencies implementation.

This would be a new concern for operators using the default swift backend, 
though. So how do we get the message out to those operators? And do we need to 
put out a message about this change in Liberty and then wait until Mitaka to 
actually remove this, or can we go ahead and remove in Liberty?

[1] https://review.openstack.org/#/c/203242
[2] 
http://specs.openstack.org/openstack/oslo-specs/specs/liberty/optional-deps.html

-Matthew
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][stable] Stable exception for bug #1447215: Allow ramdisk/kernel_id to be None

2015-07-28 Thread Kuvaja, Erno
 -Original Message-
 From: Flavio Percoco [mailto:fla...@redhat.com]
 Sent: Tuesday, July 28, 2015 3:15 PM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [nova][glance][stable] Stable exception for bug
 #1447215: Allow ramdisk/kernel_id to be None
 
 Greetings,
 
 We recently found a bug in the Nova-Glance interaction that prevents
 booting snapshots using Glance's V2 API. The bug is that Nova creates the
 snapshot and sets the ramdisk_id/kernel_id to None when it's not available.
 
 While this was ok in V1, it causes a failure for V2 since the current schema-
 properties file doesn't allow both fields to be None. The right fix would be 
 for
 Nova not to set these fields at all if no value is found.
 
 Nonetheless, we have a workaround that would make this work. The
 workaround landed in master and it's been proposed for kilo.
 Therefore, I'm asking for a stable exception to merge this patch, which is
 backwards compatible (unless I'm missing something). The exception is being
 requested because it does change the API.

+1

In my understanding this is backwards compatible as well, and it would make 
future upgrades much easier once Nova starts consuming the v2 Images API.
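
For reference, the shape of the schema relaxation being discussed is tiny. A
hedged sketch with jsonschema; the exact uuid pattern here is an assumption:

    import jsonschema

    UUID = ('^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}'
            '-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$')
    schema = {
        'type': 'object',
        'properties': {
            # 'pattern' only constrains strings, so null now passes too
            'kernel_id': {'type': ['null', 'string'], 'pattern': UUID},
            'ramdisk_id': {'type': ['null', 'string'], 'pattern': UUID},
        },
    }

    # A snapshot with both fields set to None validates instead of erroring:
    jsonschema.validate({'kernel_id': None, 'ramdisk_id': None}, schema)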

- Erno

 
 The change proposed is to allow these fields to be None.
 
 The review: https://review.openstack.org/#/c/205432/
 
 Cheers,
 Flavio
 
 --
 @flaper87
 Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Storing Heat Templates on Glance Artifact Repo on Kilo

2015-07-27 Thread Kuvaja, Erno
Hi Thiago,

As it stands, we do not support storing anything other than images in Glance. 
Then again, nothing really prevents it either, as we do not verify that uploaded 
images actually contain image data anyway.

You might want to look into the ongoing work around Images API v3, also known as 
Artifacts. The planned implementation would most probably address your use case, 
even though it will not initially provide images at all ((v1,) v2 & v3 can, and 
probably need to, run concurrently to address all the needs).


-  Erno

From: Martinx - ジェームズ [mailto:thiagocmarti...@gmail.com]
Sent: Friday, July 24, 2015 11:04 PM
To: openstack@lists.openstack.org
Subject: [Openstack] Storing Heat Templates on Glance Artifact Repo on Kilo

Guys,

 I have a bunch of Heat Templates and I would like to know if it is possible to 
store those templates on Glance.

 Is it possible?

 If yes, how?

 I'm using OpenStack Kilo on top of Ubuntu Trusty (using Ubuntu Cloud Archive).

Thanks!
Thiago
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [glance][api] Response when a illegal body is sent

2015-07-27 Thread Kuvaja, Erno
 -Original Message-
 From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
 Sent: Friday, July 24, 2015 4:58 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance][api] Response when a illegal body is
 sent
 
 
 
 On 7/23/15, 19:38, michael mccune m...@redhat.com wrote:
 
 On 07/23/2015 12:43 PM, Ryan Brown wrote:
  On 07/23/2015 12:13 PM, Jay Pipes wrote:
  On 07/23/2015 10:53 AM, Bunting, Niall wrote:
  Hi,
 
  Currently when a body is passed to an API operation that explicitly
  does not allow bodies Glance throws a 500.
 
  Such as in this bug report:
  https://bugs.launchpad.net/glance/+bug/1475647 This is an example
  of a GET however this also applies to other requests.
 
  What should Glance do rather than throwing a 500, should it return
  a
  400 as the user provided an illegal body
 
  Yep, this.
 
  +1, this should be a 400. It would also be acceptable (though less
  preferable) to ignore any body on GET requests and execute the
  request as normal.
 
  Best,
  -jay
 
 i'm also +1 on the 400 band wagon
 
 400 feels right for when Glance is operating without anything in front of it.
 However, let me present a hypothetical situation:
 
 Company X is operating Glance behind a load-balancing proxy. Most users
 talk to Glance behind the LB. If someone writes a quick script to send a GET
 and (for whatever reason) includes a body, they'll get a 200 with the data
 that would otherwise have been sent if they didn't include a body.
 This is because most such proxies will strip the body on a GET (even though
 RFC 7231 allows for bodies on a GET and explicitly refuses to define semantic
 meaning for them). If later that script is updated to work behind the load
 balancer it will be broken, because Glance is choosing to error instead of
 ignoring it.
 
 Note: I'm not arguing that the user is correct in sending a body when there
 shouldn't be one sent, just that we're going to confuse a lot of people with
 this.
 
 I'm also fine with either a 400 or a 200.

I'd be pro 400-series here. Firstly because our Images API v2 documentation 
clearly states "This operation does not accept a request body." under the GET 
section of most of our paths: 
http://developer.openstack.org/api-ref-image-v2.html

I do not think we should change that just to facilitate someone who is breaking 
our API and happens to be lucky enough to have a proxy sanitizing the request in 
between (which IMO is the second wrong in this corner; the proxy should not 
alter the request content in the first place). Based on our API documentation, I 
can see catching this with a 400-series response as a bug fix, and I'll be more 
than happy to throw the discussion about changing our APIs to accept a body in 
GET requests into a spec and object to it there.

It's just wrong to send the message that it's OK to send us any garbage with 
your request and consume the extra resources by doing so.
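
For what it's worth, the 400 check being argued for is tiny. A minimal sketch
as WSGI middleware, assuming a webob-based pipeline like Glance's; this is
illustrative only, not the actual fix:

    import webob.dec
    import webob.exc

    class NoBodyOnGet(object):
        """Illustrative only: reject bodies on methods documented as body-less."""

        def __init__(self, app):
            self.app = app

        @webob.dec.wsgify
        def __call__(self, req):
            if req.method in ('GET', 'HEAD') and req.content_length:
                raise webob.exc.HTTPBadRequest(
                    explanation='This operation does not accept a request body.')
            return req.get_response(self.app)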

- Erno
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Log] Log Working Group weekly meeting

2015-07-23 Thread Kuvaja, Erno
Hi all,

The Log Working Group has been running weekly meetings on Wednesdays at 20:00 
UTC. There were queries a while back about adjusting the meeting time to 
accommodate EMEA/APJ folks a bit better.

The current regular participants are fine with the time we're holding the 
meeting now, but we wanted to probe the community to see if there are people who 
would like to participate but for whom the current time is absolutely not 
viable. So please share this message and let us know whether an earlier time 
slot would bring more activity to this working group. At the moment the plan is 
to keep our current meeting time and, if there is a need, move to a bi-weekly 
rotation of early and late slots.

We will bring this topic back to the table at next Wednesday's meeting.

Thanks,
Erno (jokke_) Kuvaja
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][kilo][glance] freeze exceptions

2015-07-21 Thread Kuvaja, Erno
Hi all,

We have been waiting on a python-cinderclient stable/kilo release for a couple 
of weeks to be able to merge the glance_store stable/kilo backports. Namely:
https://review.openstack.org/#/q/status:open+project:openstack/glance_store+branch:stable/kilo,n,z

As Alan blocked them all, I'd like to ask everyone to hold your horses on the 
2015.1.1 release until Cinder gets their client released, so we can fix 
glance_store for the release.

Thanks,
Erno
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Progress of the Python 3 port

2015-07-15 Thread Kuvaja, Erno
Victor,

That was related to the stable/kilo branch, as stated on the line above:

14:39:58 nikhil_k jokke_: was that you on python-glanceclient stable/kilo
14:39:59 jokke_ also queued up bunch of stable stuff ... glance_store gotta 
wait until Cinder gets their client requirements fixed, but glanceclient 
backports would need some love

;)
- Erno

 -Original Message-
 From: Victor Stinner [mailto:vstin...@redhat.com]
 Sent: Wednesday, July 15, 2015 9:39 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance] Progress of the Python 3 port
 
 Hi,
 
 On 09/07/2015 16:29, Ian Cordasco wrote:
  Thanks for your hard work Victor! I'll bring up a new release today in
  the Glance meeting (currently running) and see if Nik and I can bug
  the release managers for a new release today.
 
 Any update on this release?
 
 In the meeting report, I read:
 14:39:59 jokke_ also queued up bunch of stable stuff ... glance_store
 gotta wait until Cinder gets their client requirements fixed, but glanceclient
 backports would need some love
 
 What is this Cinder client requirements issue? Is it fixed now? The stable
 stuff are required for the new glance_store release?
 
 http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-07-09-
 14.00.log.html
 
 Victor
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] The sorry state of our spec process

2015-07-03 Thread Kuvaja, Erno
First of all, thanks Flavio for bringing this out into the daylight!

I have been really frustrated with the Glance spec process since the beginning, 
and as a Glance core I have tried to work around its limitations as best I can. 
I'm not sure whether the process is broken in a similar way in other projects, 
but after piloting the process in Glance for a couple of cycles I think we 
should take some action on it.

A few comments inline, as that way they are easier to scope.

 -Original Message-
 From: Flavio Percoco [mailto:fla...@redhat.com]
 Sent: Wednesday, July 01, 2015 2:49 PM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [glance] The sorry state of our spec process
 
 Greetings,
 
 We're 1 week through L-2 (or is it 2?, I can't do time) and we, the glance
 project, haven't even merged a single spec. Regardless of the reasons
 behind this situation and the fact that we've been indeed taking steps to
 improve this situation, I think we should put this issue to an end.

This is a really sad state to be in. We haven't approved a single spec by the 
time other projects are already freezing their spec repos for L.
 
 There are many issues we've faced in Glance:
 
 1. The glance-drivers team is too small [0] 2. Many specs have been held back
 waiting for code [1] 3. Huge expectations from the spec and very low review
 rate (even from other members of the glance team).

This issue was discussed a while ago and was postponed in order to clarify the 
Glance spec process. Since then, this is the first initiative to bring the issue 
back to the table, and I'd like to hear whether that process-defining work is 
still blocking the expansion to share the workload. If so, could we please get 
the proposal for that work, or at least the parts of it that need to be ironed 
out, into the open so we can move it forward as a community?
 
 There was a recent discussion on this m-l[2] about the spec process in Nova
 and while I don't agree with everything that was said there, I do think it
 highlights important problems that we're facing in glance as well.
 
 Therefore, I'd like to propose the following:
 
 1. Expand our drivers team. I thik people in the glance community are getting
 annoyed of reading this requests from me and The Mythical Man-Month
 would agree with them. However, it's really sad that some of our oldest (in
 terms of tenure) contributors that have shown interest in joining the team
 haven't been added yet. I proposed to bring all cores to the drivers team
 already and I still think that's a good thing. Assuming that's something we
 don't want, then I'd like us to find 2 or 3 people willing to volunteer for 
 this
 task.

If this still cannot happen, I'd like to get a full commitment from our current 
drivers to dedicate the time and effort for a speedy workflow on our specs, or 
to step down and scrap the whole spec process.
 
 2. Lets try to get things into the backlog instead of expecting them to be
 perfectly shaped and targeted for this release. Lets let people start from  a
 base, generally agreed, idea so that code can be written and reviews on the
 actual feature can be made. Once the feature is implemented, we can move
 the spec to the release directory. I believe this was also proposed in 
 Nikola's
 thread[2].

I'm a huge supporter of this. Specs being part of our normal review workflow 
makes changing them as needed easy and trackable. Why on earth do we need to 
have a perfect plan, and an implementation of that plan, before we're willing to 
indicate approval of the initial idea?
 
 3. Not all specs need to have 3-month-long discussions to be approved.
 I'm not suggesting to just merge specs that are in poor state but we can't
 always ask for code to understand whether a spec makes sense or not.
 
 Unfortunately, we're already in L-2 and I believe it'll be really hard for 
 some
 of those features to land in Liberty, which is also sad and quite a waste of
 time.

How long will we keep people trying to improve the project if any given piece of 
proposed functionality takes cycles and lots of politics to merge?
 
 I don't believe the above is the ultimate solution to this issue but I 
 believe it
 will help. For the next cycle, we really need to organize this process
 differently.

++

- Erno

 
 The email is already long enough so, I hope we'll agree on something and
 finally take some actions.
 
 Cheers,
 Flavio
 
 [0] https://review.openstack.org/#/c/126550/
 [1] https://review.openstack.org/#/admin/groups/342,members
 (Mark Washenberger and Arnaud Legendre are not contributors anymore)
 [2] http://lists.openstack.org/pipermail/openstack-dev/2015-
 June/067861.html
 
 
 --
 @flaper87
 Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][stable] Cinder client broken in Juno

2015-06-23 Thread Kuvaja, Erno
Hi Mike,

We have similar functionality in Glance, and I think this fix is critical enough 
to backport to Juno.

As for considering it for Icehouse, definitely not: 
http://lists.openstack.org/pipermail/openstack-announce/2015-June/000372.html

- Erno (jokke_)

 -Original Message-
 From: Mike Perez [mailto:thin...@gmail.com]
 Sent: 23 June 2015 16:50
 To: OpenStack Development Mailing List
 Cc: Jesse Keating
 Subject: [openstack-dev] [cinder][stable] Cinder client broken in Juno
 
 There was a bug raised [1] from some large deployments that the Cinder
 client 1.2.0 and beyond is not working because of version discovery.
 Unfortunately it's not taking into account of deployments that have a proxy.
 
 Cinder client asks Keystone to find a publicURL based on a version.
 Keystone will gather data from the service catalog and ask Cinder for a list 
 of
 the public endpoints and compare. For the proxy cases, Cinder is giving
 internal URLs back to the proxy and Keystone ends up using that instead of
 the publicURL in the service catalog. As a result, clients usually won't be 
 able
 to use the internal URL and rightfully so.
 
 This is all correctly setup on the deployer's side, this an issue with the 
 server
 side code of Cinder.
 
 There is a patch that allows the deployer to specify a configuration option
 public_endpoint [2] which was introduced in a patch in Kilo [3]. The problem
 though is we can't expect people to already be running Kilo to take
 advantage of this, and it leaves deployers running stable releases of Juno in
 the dark with clients upgrading and using the latest.
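
As an aside, the workaround in [2][3] boils down to registering a single
option. A minimal sketch of how such an option is registered with oslo.config;
the help text here is an assumption:

    from oslo_config import cfg

    # Sketch only: the 'public_endpoint' option discussed above.
    service_opts = [
        cfg.StrOpt('public_endpoint',
                   help='Public URL to return in version discovery responses '
                        'when the API sits behind a proxy.'),
    ]

    cfg.CONF.register_opts(service_opts)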
 
 Two options:
 
 1) Revert version discovery which was introduced in Kilo for Cinder client.
 
 2) Grant exception on backporting [4] a patch that helps with this problem,
 and introduces a config option that does not change default behavior. I'm
 also not sure if this should be considered for Icehouse.
 
 
 [1] - https://launchpad.net/bugs/1464160
 [2] - http://docs.openstack.org/kilo/config-reference/content/cinder-conf-
 changes-kilo.html
 [3] - https://review.openstack.org/#/c/159374/
 [4] - https://review.openstack.org/#/c/194719/
 
 --
 Mike Perez
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [all] Proposal for Weekly Glance Drivers meeting.

2015-06-17 Thread Kuvaja, Erno
As this Fri Jun 19th vote did not seem to carry much weight, I'll just express 
my opinion to the mailing list for the record.

Personally I think this is a bad idea, but not being a Glance driver I can't say 
how much need there is for such a meeting. Specs should be raised during our 
weekly meeting and/or discussed on the mailing list and in Gerrit. Having 
another IRC meeting just for these discussions (especially at this time) sends a 
quite clear signal that input from eastern EMEA & APJ is neither needed nor 
desired. Based on the description from Nikhil and the weekly nature of this 
meeting, I would assume the intention was not just a quick sync between the 
drivers, which I could have understood.

I'd be happy to be told I'm wrong on the assumptions above ;)

- Erno

 -Original Message-
 From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
 Sent: 16 June 2015 18:23
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Glance] [all] Proposal for Weekly Glance
 Drivers meeting.
 
 FYI, We will be closing the vote on Friday, June 19 at 1700 UTC.
 
 On 6/15/15 7:41 PM, Nikhil Komawar wrote:
  Hi,
 
  As per the discussion during the last weekly Glance meeting
  (14:51:42at
  http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-06-
 11-
  14.00.log.html ), we will begin a short drivers' meeting where anyone
  can come and get more feedback.
 
  The purpose is to enable those who need multiple drivers in the same
  place; easily co-ordinate, schedule  collaborate on the specs, get
  core-reviewers assigned to their specs etc. This will also enable more
  synchronous style feedback, help with more collaboration as well as
  with dedicated time for giving quality input on the specs. All are
  welcome to attend and attendance from drivers is not mandatory but
 encouraged.
  Initially it would be a 30 min meeting and if need persists we will
  extend the period.
 
  Please vote on the proposed time and date:
  https://review.openstack.org/#/c/192008/ (Note: Run the tests for your
  vote to ensure we are considering feasible  non-conflicting times.)
  We will start the meeting next week unless there are strong conflicts.
 
 
 --
 
 Thanks,
 Nikhil
 
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-08 Thread Kuvaja, Erno
 -Original Message-
 From: Thierry Carrez [mailto:thie...@openstack.org]
 Sent: Friday, June 05, 2015 1:46 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [all] [stable] No longer doing stable point
 releases
 
 So.. summarizing the various options again:
 
 Plan A
 Just drop stable point releases.
 (-) No more release notes
 (-) Lack of reference points to compare installations
 
 Plan B
 Push date-based tags across supported projects from time to time.
 (-) Encourages to continue using same version across the board
 (-) Almost as much work as making proper releases
 
 Plan C
 Let projects randomly tag point releases whenever
 (-) Still a bit costly in terms of herding cats
 
 Plan D
 Drop stable point releases, publish per-commit tarballs
 (-) Requires some infra changes, takes some storage space
 
 Plans B, C and D also require some release note / changelog generation from
 data maintained *within* the repository.
 
 Personally I think the objections raised against plan A are valid. I like 
 plan D,
 since it's more like releasing every commit than not releasing anymore. I
 think it's the most honest trade-off. I could go with plan C, but I think it's
 added work for no additional value to the user.
 
 What would be your preferred option ?
 
 --
 Thierry Carrez (ttx)
 

I don't think plans B & C are of any benefit to anyone, based on the statements 
discussed earlier, so I won't repeat those. One thing I like about plan D is 
that it would also give an indicator of how much the stable branch has moved in 
each individual project. 

Yes, this can be seen from git, but having a plain running number on the stable 
release for each commit would give a really quick glimpse, as opposed to SHAs or 
counting the changes from the git log.

- Erno

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [log] No Log WG meeting today

2015-06-03 Thread Kuvaja, Erno
Hi all,

We do not have any burning items on the agenda today (or any whatsoever that I'm 
aware of), Rocky is away, and I myself have conflicting schedules. Let's gather 
together again next week.

Meeting cancelled today, Wed 3rd of June!

Best,
Erno
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [all] Liberty summit: Updates in Glance

2015-05-27 Thread Kuvaja, Erno
 -Original Message-
 From: Flavio Percoco [mailto:fla...@redhat.com]
 Sent: 27 May 2015 00:58
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Glance] [all] Liberty summit: Updates in Glance
 
 Jesse, you beat me on this one :)
 
 On 26/05/15 13:54 -0400, Nikhil Komawar wrote:
 Thank you Jesse for your valuable input (here and at the summit) as
 well as intent to clarify the discussion.
 
 Just trying to ensure people are aware about the EXPERIMENTAL nature of
 the v3 API and reasons behind it. Please find my responses in-line.
 However, I do want to ensure you all, that we will strive hard to move
 away from the EXPERIMENTAL nature and go with a rock solid
 implementation as and when interest grows in the code-base (that helps
 stabilize it).
 
 On 5/26/15 12:57 PM, Jesse Cook wrote:
 
 
 On 5/22/15, 4:28 PM, Nikhil Komawar nik.koma...@gmail.com
 wrote:
 
 
 Hi all,
 
 tl;dr; Artifacts IS staying in Glance.
 
  1. We had a nice discussion at the contributors' meet-up at the
 Vancouver summit this morning. After weighing in many 
  possibilities
 and evolution of the Glance program, we have decided to go ahead
 with the Artifacts implementation within Glance program under the
 EXPERIMENTAL v3 API.
 
 Want to clarify a bit here. My understanding is: s/Artifacts/v3 API/g. 
  That
 is to say, Artifacts is the technical implementation of the v3 API. This
 also means the v3 API is an objects API vs just an images API.
 
 
 Generic data assets' API would be a nice term along the lines of the
 mission statement. Artifacts seemed fitting as that was the focus of
 discussion at various sessions.
 
 Regardless of how we call it, I do appreciate the clarity on the fact that
 Artifacts - data assests - is just the technical implementation of what will 
 be
 Glance's API v3. It's an important distinction to avoid sending the wrong
 message on what it's going to be done there.
 
 
 We also had some hallway talk about putting the v1 and v2 APIs on top of
 the v3 API. This forces faster adoption, verifies supportability via v1 
  and
 v2 tests, increases supportability of v1 and v2 APIs, and pushes out the
 need to kill v1 API.
 
 Let's discuss more as time and development progresses on that
 possibility. v3 API should stay EXPERIMENTAL for now as that would help
 us understand use-cases across programs as it gets adopted by various
 code-bases. Putting v1/v2 on top of v3 would be tricky for now as we
 may have breaking changes with code being relatively-less stable due to
 narrow review domain.
 
 I actually think we'd benefit more from having V2 on top of V3 than not doing
 it. I'd probably advocate to make this M material rather than L but I think 
 it'd
 be good.

We perhaps would, but that would realistically push v2 adoption across the 
projects to somewhere around the O release. Just look at how long it took the v2 
code base to mature to the point that we're seriously talking about using it in 
production.
 
 I think regardless of what we do, I'd like to kill v1 as it has a sharing 
 model
 that is not secure.

The above would postpone this one to somewhere around Q-R (which, btw, is not 
that far from U anymore).

The more I think about this, the more convinced I am that we should focus on 
moving our consumers to v2 and deprecating v1 out; after that we can start 
talking about moving v2 on top of the v3 codebase if possible, rather than the 
other way around in the hope that it would speed up v3 adoption.

- Erno
 
 Flavio
 
  2. The effort would primarily be conducted as a sub-team-like
 structure within the program and the co-coordinators and drivers 
  of
 the necessary Artifacts features would be given core-reviewer
 status temporarily with an informal agreement to merge code that 
  is
 only related to Artifacts.
  3. The entire Glance team would give reviews as time and priorities
 permit. The approval (+A/+WorkFlow) of any code within the
 program
 would need to come from core-reviewers who are not temporarily
 authorized. The list of such individuals and updated time-line
 would be documented in phases during the course of Liberty cycle.
  4. We will continue to evaluate  update the governance, maturity of
 the code and future plans for the v1, v2 and v3 Glance APIs as 
  time
 progresses. However, for now we are aiming to integrate all of
 Glance (specifically Images) as Artifacts in the v3 API.
 
 
 As I understand it, that is to say that v3 requests in the first
 “micro-version” that specify the object type as image would get a not
 implemented or similar error. The next “micro-version” would likely
 contain the support for images along with possibly implementing the v1
 and v2 

[openstack-dev] [glance] Call to action, revisit CIS state

2015-04-27 Thread Kuvaja, Erno
Hi all,

As you probably know, CIS was expanded from the Juno metadefs work this cycle
based on the spec [1] provided. The implementation was merged in quite a rush
just before feature freeze.

During the spec review [2] for client functionality for CIS it came to our
attention that the implementation exposes Elasticsearch perhaps too openly via
its API (namely the creation of datasets allows the API consumer to upload
arbitrary files via the create request).

Call to action: Please review the CIS functionality again for security threats 
and bring them up so we can form a plan if we need to address those and request 
RC3 before release.

I have a couple of major concerns about this workflow:

1)  I was shocked after reading the following statement from the client spec
review discussion: "During the Kilo release, we - by we I mean the team
responsible for implementing the CIS - have discussed such scenario, that
exposing Elasticsearch capabilities to the user consuming the CIS API can bring
some serious security impact." Neither this discussion nor the scenario was ever
brought to the attention of the wider Glance community. The spec bluntly states
that there is no security impact from the implementation, and the concerns
should have been brought up so reviewers would have had a better chance to catch
possible threats.

2)  "Would like to also address your concern that proposed shape of spec
allows user to upload arbitrary documents to Elasticsearch (ES is the engine
used under the hood, we should rather talk about uploading documents to CIS
service) which are not related to Glance in any way (images and metadefs in
current implementation). Personally I don't think that discussion about
IF is a valid topic, because we've already implemented backend for CIS at the
Glance side and you cannot say A without saying B." As long as the code is
developed under the Glance project and reviewed by glance-core, it's outrageous
to claim that possible issues are not related to Glance in any way. Discussion
about whether the API is implemented per the spec and fits the mission statement
is really valid at this point and needs to be thoroughly discussed. We need to
find the root cause of this attitude and fix it before it damages the
relationships within the community in a way that cannot be fixed.

3)  We had two huge pieces of code merging in at the very end of the
development cycle, Artifacts and CIS, and the pressure to merge them in
(unfortunately not review but merge) was high. On the Artifacts side we had a
pretty open discussion about the state, the concerns and the plans and timelines
to address those concerns. With CIS we unfortunately did not have this openness.
Was it a reflection of 1 and 2 or something else, I do not know, but I surely
would like to.

I would like you to look back into those two specs and the comments, look back
into the implementation and raise any urgent concerns, and please let's try to
have a good and healthy base for discussion at the Vancouver Summit on how we
will continue forward from this! As Stable Branch Liaison I would really like to
know what we (and who that "we" are) are supporting for the next couple of
cycles, as glance-core I would like to know any concerns about used technology or
implementation people might have, and as a Glance community member I'd like to
see us working together towards these things and definitely not have these "we
vs. them/you" discussions anymore. Bluntly, if we need to split the team, let's
do it officially; there is room under the big tent for every group who wants to
be by themselves.

Best Regards,
Erno

[1] 
http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-service.html
[2] https://review.openstack.org/#/c/173718/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Call to action, revisit CIS state

2015-04-27 Thread Kuvaja, Erno
Thanks for your response Kamil,

I think the first lesson learnt (again) is that when there are >=1 non-native
speakers involved, make sure you understand the message as it's meant, not as it
has been written or as you apprehend it. I'm happy I did not write what I had
in my mind on Thu night or Fri morning but slept over the weekend, as that 0 to
v1 happened in no time.

All, when revisiting the topic, please leave the edge off and let's ensure we
have a great Kilo release as intended. Still, it's not too late to test the RC
and make sure we don't have anything that needs panic fixing before final.


-  Erno

From: Rykowski, Kamil [mailto:kamil.rykow...@intel.com]
Sent: 27 April 2015 13:10
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Call to action, revisit CIS state

Hi,

I'm responsible for the spec for supporting CIS in the glanceclient, as well as 
the comments which brought some fuss, so would like to clarify some things.


1)  That's right - the following scenario hasn't been included in the `Security
Impact` section. That's because there is no real security impact here, and I
probably should rephrase the sentence to better match the current
implementation status of the CIS. The user consuming the CIS API is not
able to read/write any data from the cluster except images and metadefs.
He can't just ask for any resource type stored there and expect results.
Here is the quote from the spec comments:

Right now any access to resources stored outside of index name `glance` and 
document type `image` and `metadef` is forbidden by CIS. User is only allowed 
to play with documents which are registered within CIS.

Additionally there is an RBAC implemented, but it has been well described in 
the original spec, so I won't repeat it here.

2)  "Would like to also address your concern that proposed shape of spec
allows user to upload arbitrary documents to Elasticsearch (ES is the engine
used under the hood, we should rather talk about uploading documents to CIS
service) which are not related to Glance in any way (images and metadefs in
current implementation)." - The meaning of this sentence is (should be) not
that storing arbitrary documents in CIS is not an issue for Glance. It says
that uploading documents outside of the Glance mission (that's what I meant by
"not related to Glance") is prohibited by the CIS.

I would like to make it clear once more - the CIS doesn't allow the API
consumer to operate on any data except Glance images and metadefinitions. CIS
is not just exposing raw Elasticsearch capabilities; it provides strict
boundaries - using policy checks, RBAC and namespace protection (index/type in
the Elasticsearch world) - on what can be stored within it and what can be
retrieved from it.
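To make that boundary concrete, here is a minimal sketch (plain Python,
assumptions of mine, not the actual CIS code) of the index/type protection
described above:

    # Only the Glance-owned Elasticsearch namespace is reachable through CIS.
    ALLOWED_INDICES = {'glance'}
    ALLOWED_DOC_TYPES = {'image', 'metadef'}

    def validate_search_target(index, doc_type):
        """Reject requests reaching outside the CIS-owned namespace."""
        if index not in ALLOWED_INDICES or doc_type not in ALLOWED_DOC_TYPES:
            raise ValueError('Access to %s/%s is forbidden by CIS'
                             % (index, doc_type))

On top of such a check, the policy and RBAC layers then decide what the caller
may see within the allowed namespace.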

From: Kuvaja, Erno [mailto:kuv...@hp.com]
Sent: Monday, April 27, 2015 12:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [glance] Call to action, revisit CIS state

Hi all,

As you probably know, CIS was expanded from the Juno metadefs work this cycle
based on the spec [1] provided. The implementation was merged in quite a rush
just before feature freeze.

During the spec review [2] for client functionality for CIS it came to our
attention that the implementation exposes Elasticsearch perhaps too openly via
its API (namely the creation of datasets allows the API consumer to upload
arbitrary files via the create request).

Call to action: Please review the CIS functionality again for security threats 
and bring them up so we can form a plan if we need to address those and request 
RC3 before release.

I have a couple of major concerns about this workflow:

1)  I was shocked after reading the following statement from the client spec
review discussion: "During the Kilo release, we - by we I mean the team
responsible for implementing the CIS - have discussed such scenario, that
exposing Elasticsearch capabilities to the user consuming the CIS API can bring
some serious security impact." Neither this discussion nor the scenario was ever
brought to the attention of the wider Glance community. The spec bluntly states
that there is no security impact from the implementation, and the concerns
should have been brought up so reviewers would have had a better chance to catch
possible threats.

2)  "Would like to also address your concern that proposed shape of spec
allows user to upload arbitrary documents to Elasticsearch (ES is the engine
used under the hood, we should rather talk about uploading documents to CIS
service) which are not related to Glance in any way (images and metadefs in
current implementation). Personally I don't think that discussion about
IF is a valid topic, because we've already implemented backend for CIS at the
Glance side and you cannot say A without saying B." As long as the code is
developed under the Glance project and reviewed by glance-core, it's outrageous

Re: [openstack-dev] [Glance] Open glance-drivers to all glance-cores

2015-04-20 Thread Kuvaja, Erno
 -Original Message-
 From: Flavio Percoco [mailto:fla...@redhat.com]
 Sent: 20 April 2015 15:04
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Glance] Open glance-drivers to all glance-cores
 
 Hello Glance folks, and not Glance folks :D
 
 Here's a thought. I believe, based on the size of our
 project/community/reviewers team, we should just give access to all glance-
 cores to glance-drivers. Few considerations:
 
  1) Many of our reviewers have been part of Glance even before I became
  part of it. It just makes no sense to me that these folks that have put
  efforts, time and that have helped making Glance what it is today don't have
  a voice on the specs. Commenting seems to not be enough, apparently.
 
 2) I'd like to encourage a more open communication in our specs review
 process and including all our current *code* reviewers seems like a good
  step forward towards that. Things that I'd love to see, and to avoid, are:
 
    - I'd love to avoid all kinds of private emails/conversations. Specs
  can be discussed either in the review (which is what it's for),
  team meetings or the mailing list.
 
    - I'd love for specs to get more attention from other folks. In
  other words, I'd like to scale our specs review process. There are
  specs that have sat there for a bit.
 
   - Our *code* reviewers know Glance's code, I want them to have a way
 to express a stronger opinion/vote.
 
 3) Although this doesn't seem to work for other projects, I believe Glance is
 not such a big community for this to fail. If anything, it should help us to
 manage the load a bit better. If there are core-reviewers that simply don't
 want to do spec reviews, that's fine.
 
 4) If there are non-core reviewers that want to be part of the glance-drivers
 team then we can vote as we do for new cores. I have to admit that I'm
 having a hard time to imagine a case like this but...
 who knows? right?
 
 5) It used to be like this and many of us just found themselves out of the
 glance-drivers team without notice. It's probably an unexpected side effect
 of disconnecting LP/gerrit and splitting the teams. Not a big deal, but...
 
 Thoughts?
 Flavio
 
 --
 @flaper87
 Flavio Percoco

Hi Flavio,

Thanks for your trust towards us. While I think this is a great gesture
(especially towards us new members), I do not think this is exactly the safest
option at the moment. We have had active discussion and a steep learning curve
with the specs over the past cycle, and I think we need to sort that out first.
The second concern I have is that, looking at our core-reviewers now, we are
actually a fairly young group since the last flush (give or take, half of us
have been core for less than a year).

I will jump around a bit on this, so please try to hang on. On your point 3) I
do agree. I think we can get there fairly soon if that is what people want, but
as mentioned I'd like to get our processes cleared first.

I'd like to address points 4 and 5 in a single hit: _if_ we in future include
the whole core team in the drivers, we should still keep the drivers group
separate, with individual members nominated to that group in a similarly open
manner as we do for our cores.

Now last but not least to your point 2) (sorry, I have really no input on 1)).
I do strongly agree with you on this. As the specs are supposed to be not just
an overview of the proposed functionality but also touch quite deeply on the
technical aspects, and as you pointed out it would be great to engage more of
the code folks in the specs, there would be room for a stronger opinion.

What I would propose as an alternative, instead of including glance-core in
glance-drivers, would be a change in the ACLs of the glance-specs repo. How
about we give the -2..+2 vote to glance-core and glance-drivers and keep the
Workflow +1 on glance-drivers only? This would give us a stronger say on the
direction but would keep the decision of taking the spec out of review (merge)
with the drivers until we can figure out/agree on and _document_ how we are
going to process the specs.
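For illustration only, in Gerrit's project.config terms the intent would look
roughly like this (a sketch of the idea, not a tested ACL; group names assumed):

    [access "refs/heads/*"]
        label-Code-Review = -2..+2 group glance-core
        label-Code-Review = -2..+2 group glance-drivers
        label-Workflow = -1..+1 group glance-drivers

i.e. both groups vote, but only glance-drivers can approve (and thus merge).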

Best,
Erno Meeting-the-half-way Kuvaja ;)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][stable][glance] stable/kilo python-glanceclient 0.17.1 release upcoming

2015-04-16 Thread Kuvaja, Erno
Hi all,

We have found critical faults in python-glanceclient causing operational
failures if https is enabled. We're in the process of backporting the fixes to
our new stable branch and will release 0.17.1 of the client as soon as possible.

Please note this if your project is using glanceclient. Affected versions are 
0.16.0..0.17.0
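For consumers pinning the client in a requirements file, something along these
lines would skip the affected range until 0.17.1 is out (illustrative only;
adjust for any point releases you track in between):

    python-glanceclient!=0.16.0,!=0.17.0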

Best,
Erno Kuvaja
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

2015-03-11 Thread Kuvaja, Erno
I'd prefer 1400 as well. But "A Foolish Consistency is the Hobgoblin of Little
Minds"

- Erno

 -Original Message-
 From: Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
 Sent: 11 March 2015 20:40
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting
 time.
 
 I'd prefer to go with 1400 UTC unless there's a majority for 1500UTC.
 
 P.S. It's my feeling that ML announcements and conversations are not
 effective when taking a poll from a wider audience, so we'd discuss this a bit
 more in the next meeting and merge the votes.
 
 Thanks,
 -Nikhil
 
 
 From: Louis Taylor lo...@kragniz.eu
 Sent: Wednesday, March 11, 2015 10:34 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting
 time.
 
 On Wed, Mar 11, 2015 at 02:25:26PM +, Ian Cordasco wrote:
  I have no opinions on the matter. Either 1400 or 1500 work for me. I
  think there are a lot of people asking for it to be at 1500 instead though.
  Would anyone object to changing it to 1500 instead (as long as it is
  one consistent time for the meeting)?
 
 I have no problem with that. I'm +1 on a consistent time, but don't mind
 when it is.
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Nitpicking in code reviews

2015-03-11 Thread Kuvaja, Erno
Hi all,

Following the code reviews lately I've noticed that we (the fan club seems to
be growing on a weekly basis) have been growing a culture of nitpicking [1] and
bikeshedding [2][3] over almost every single change.

Seriously my dear friends, the following things are not worth a -1 vote, if even
a comment:

1)  Minor spelling errors on commit messages (as long as the message comes 
through and flags are not misspelled).

2)  Minor spelling errors on comments (docstrings and documentation are one
thing, but comments, come on).

3)  Used syntax that is functional, readable and does not break consistency 
but does not please your poem bowel.

4)  Other things you just did not realize to check whether they were there.
After you have gone through the whole change, go and look at your comments again
and think twice whether your concern/question/whatsoever was addressed somewhere
else than where your first intuition would have dropped it.

We have a relatively high volume for Glance at the moment, and this nitpicking
and bikeshedding does not help anyone. At best it just tightens nerves and
breaks our group. Obviously if there are "you had ONE job" kinds of situations,
or there is a relatively high amount of errors combined with something serious,
it's reasonable to ask to fix the typos on the way as well. Whether the reason
is a need to increase your statistics, a personal perfectionist nature or
actually I do not care what; just stop, or go and do it somewhere else.

Love and pink ponies,

-  Erno

[1] http://www.urbandictionary.com/define.php?term=nitpicking
[2] http://bikeshed.com
[3] http://en.wiktionary.org/wiki/bikeshedding

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-11 Thread Kuvaja, Erno


 -Original Message-
 From: Stefano Maffulli [mailto:stef...@openstack.org]
 Sent: 12 March 2015 00:26
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Avoiding regression in project governance
 
 On Wed, 2015-03-11 at 17:59 -0500, Ed Leafe wrote:
  The longer we try to be both sides of this process, the longer we will
  continue to have these back-and-forths about stability vs. innovation.
 
 If I understand correctly your model, it works only for users/operators who
 decide to rely on a vendor to consume OpenStack. There are quite large
 enterprises out there who consume directly the code as it's shipped from
 git.openstack.org, some from trunk others from the stable release .tgz:
 these guys can't count on companies A, B, C or D to put resources to fix their
 problems, because they don't talk to those companies.
 
 One thing I like of your proposal though, when you say:
 
  So what is production-ready? And how would you trust any such
  designation? I think that it should be the responsibility of groups
  outside of OpenStack development to make that call.
 
 This problem has been bugging the European authorities for a long time and
 they've invested quite a lot of money to find tools that would help IT
 managers of the public (and private) sector estimate the quality of open
 source code. It's a big deal in fact when on one hand you have Microsoft and
 IBM sales folks selling your IT managers overpriced stuff that just works
 and on the other hand you have this Linux thing that nobody has heard of,
 it's gratis and I can find it on the web and many say it just works, too...
 crazy, right? Well, at the time it was and to some extent, it still is. So 
 the EU
 has funded lots of research in this area.
 
 One group of researchers that I happen to be familiar with recently has
 received another bag of Euros and released code/methodologies to evaluate
 and compare open source projects[1].
 software are not that hard to find and are quite objective. For
 example: is there a book published about this project? If there is, chances
 are this project is popular enough for a publisher to sell copies. Is the
 project's documentation translated in multiple languages?
 Then we can assume the project is popular. How long has the code been
 around? How large is the pool of contributors? Are there training programs
 offered? You get the gist.
 
 Following up on my previous crazy ideas (did I hear someone yell keep 'em
 coming?), probably a set of tags like:
 
book-exists (or book-chapter-exists)
specific-training-offered
translated-in-1-language (and its bigger brothers translated-in-5,
 translated-in-10+languages)
contributor-size-high (or low, and we can set a rule as we do for the
 diversity metric used in incubation/graduation)
codebase-age-baby, -young and  -mature,  (in classes, like less than 1, 
 1-3,
 3+ years old)
 
 would help a user understand that Nova or Neutron are different from
 (say) Barbican or Zaqar. These are just statements of facts, not a qualitative
 assessment of any of the projects mentioned. At the same time, I have the
 impression these facts would help our users make up their mind.
 
 Thoughts?

Just one: is it too late to change the name? "Tag" is pretty overloaded and I
rather like the sound of "badge". It would be nice to see projects working
towards different new badges and carrying them proudly after earning them.

Oh, another one: I'm not convinced that 3+ years is still a mature project. I
think there is room to look a bit outside of our own sandbox and think where we
are in 2, 3 or 5 years' time. Perhaps we need to change the governance again,
perhaps this could be something that stays flexible all the way there, but I
would hate to call Nova, Swift, Glance etc. "ancient" or "granny" just because
they have been around double/triple the mature time.

- Erno
 

 
 [1]
 http://www.ict-prose.eu/2014/12/09/osseval-prose-open-source-
 evaluation-methodology-and-tool/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [log] Log working group -- Alternate moderator needed for today

2015-03-10 Thread Kuvaja, Erno
You mean for tomorrow? No worries, I can kick off the meeting and run through
the agenda if we have something to address.

Take best out of the ops meetup!


-  Erno

From: Rochelle Grober [mailto:rochelle.gro...@huawei.com]
Sent: Tuesday, March 10, 2015 4:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [log] Log working group -- Alternate moderator needed 
for today

Or Cancellation.  I'm in the Ops Midcycle meeting and can't guarantee I can 
join.

Meeting meets Wednesdays at 20:00UTC, which is now 11am PDT.

--Rocky
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][log] Log Working Group priorities

2015-03-05 Thread Kuvaja, Erno
Hi all,

We had our first logging workgroup meeting [1] yesterday where we agreed on 3
main priorities for the group to focus on. Please review and provide your
feedback:


1)  Educating the community about the Logging Guidelines spec

a.   
http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html

b.  Please familiarize yourself with it and try to follow the pointers; fix
where you see your project deviating from these.

c.   If this is the first time you see this spec and you think there is 
something awfully off, please let us know.

2)  Cross project specs for Request IDs and Error codes

a.   There are spec proposals in the Cinder tree [2] for Request IDs and in
the Glance tree [3] for Error codes

b.  The cross project specs are being written on the basis of these specs,
adjusted with the feedback and ideas collected from a wider audience at and
since the Paris Summit

c.   Links for the specs will be provided as soon as they are available for
review (a rough sketch of the request ID idea follows the references below)

3)  Project Liaisons for Log Working Group [4]

a.   Person helping us out to implement the work items in the project

b.  No need to be core

c.   Please, no fighting for the slots. We happily take all available hands 
onboard on this. :)

[1] http://eavesdrop.openstack.org/meetings/log_wg/2015/
[2] https://review.openstack.org/#/c/156508/
[3] https://review.openstack.org/#/c/127482
[4] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Logging_Working_Group
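As a rough illustration of what the request ID work is after (a stdlib-only
sketch of mine, not the format from the specs), the idea is that every log line
carries the request ID so lines can be correlated across services:

    import logging

    # Sketch only: a context format carrying the request ID on every record.
    fmt = '%(asctime)s %(levelname)s %(name)s [req-%(request_id)s] %(message)s'
    logging.basicConfig(format=fmt)
    log = logging.getLogger('glance.api')

    # The caller supplies the request ID via 'extra' so it lands in the output.
    log.error('Image upload failed', extra={'request_id': 'abc123'})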



Thanks,

Erno
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Core nominations.

2015-03-04 Thread Kuvaja, Erno
 From: Nikhil Komawar [mailto:nikhil.koma...@rackspace.com] 
 Sent: Tuesday, March 03, 2015 4:10 PM
 To: OpenStack Development Mailing List (not for usage questions); Daniel P. 
 Berrange
 Cc: krag...@gmail.com
 Subject: Re: [openstack-dev] [Glance] Core nominations.

 If it was not clear in my previous message, I would like to again emphasize 
 that I truly appreciate the vigor and intent behind Flavio's proposal. We 
 need to be proactive and keep making the community better in such regards.

 However, at the same time we need to act fairly, with patience and have a 
 friendly strategy for doing the same (thus maintaining a good balance in our 
 progress). I should probably respond to another thread on ML mentioning my 
 opinion that the community's success depends on trust and empathy and 
 everyone's intent as well as effort in maintaining these principles. Without 
 them, it will not take very long to make the situation chaotic.

snip

 Hence, I think coming with a good plan during the feature freeze period 
 including when and how are we going to implement it, when would be a final 
 draft of cores to be rotated be published, etc. questions would be answered 
 with _patience_ and input from other cores. We would have a plan in K so, 
 that WOULD be a step forward as discussed in the beginning and be implemented 
 in L, ensuring out empathetic stand.  

 The essence of the matter is:
 We need to change the dynamics slowly and with patience for maintaining a 
 good balance.

Based on the above I must vote -- for adding 4 new cores without doing the
housekeeping. To maintain a good balance one needs to achieve that first, and I
do not see how the proposal is slow and patient towards that goal.

- Erno

 Best,
 -Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-03 Thread Kuvaja, Erno
From: John Griffith [mailto:john.griffi...@gmail.com]
Sent: 03 March 2015 14:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] auto-abandon changesets considered harmful (was 
Re: [stable][all] Revisiting the 6 month release cycle [metrics])



On Tue, Mar 3, 2015 at 7:18 AM, Kuvaja, Erno kuv...@hp.com wrote:
 -Original Message-
 From: Thierry Carrez [mailto:thie...@openstack.org]
 Sent: 03 March 2015 10:00
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] auto-abandon changesets considered harmful
 (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

 Doug Wiegley wrote:
  [...]
  But I think some of the push back in this thread is challenging this notion
 that abandoning is negative, which you seem to be treating as a given.
 
  I don't. At all. And I don't think I'm alone.

 I was initially on your side: the abandoned patches are not really deleted,
 you can easily restore them. So abandoned could just mean inactive or
 stale in our workflow, and people who actually care can easily unabandon
 them to make them active again. And since abandoning is currently the
 only way to permanently get rid of stale / -2ed / undesirable changes
 anyway, so we should just use that.

 But words matter, especially for new contributors. For those contributors,
 someone else abandoning a proposed patch of theirs is a pretty strong
 move. To them, abandoning should be their decision, not yours (reviewers
 can -2 patches).

 Launchpad used to have a similar struggle between real meaning and
 workflow meaning. It used to have a single status for rejected bugs
 (Invalid). In the regular bug workflow, that status would be used for valid
 bugs that you just don't want to fix. But then that created confusion to
 people outside that workflow since the wrong word was used.
 So WontFix was introduced as a similar closed state (and then they added
 Opinion because WontFix seemed too harsh, but that's another story).

 We have (like always) tension around the precise words we use. You say
 Abandon is generally used in our community to set inactive. Jim says
 Abandon should mean abandon and therefore should probably be left to
 the proposer, and other ways should be used to set inactive.

 There are multiple solutions to this naming issue. You can rename abandon
 so that it actually means set inactive or mark as stale.

 Or you can restrict abandon to the owner of a change, stop defaulting to
 is:open to list changes, and introduce features in Gerrit so that a 
 is:active
 query would give you the right thing. But that query would need to be the
 Gerrit default, not some obscure query you can run or add to your dashboard
 -- otherwise we are back at step 1.

 --
 Thierry Carrez (ttx)

I'd like to ask a few questions regarding this, as I'm very much pro cleaning
the review queues of abandoned stuff.

How often do people (committer/owner/_reviewer_) abandon changes actively? I do
not mean here only cores marking other people's abandoned PSs as abandoned; I
mean how many times have you seen a person stating that (s)he will not review a
change anymore? I haven't seen that, but I've seen lots of changes where a
person has reviewed at some early stage and 10 revisions later still has not
given input again. What I'm trying to say here is that it does not make the
change any less abandoned if it's not marked abandoned by the owner. It's
rarely an active process.

Regarding the contributor experience, I'd say it's way more harmful not to mark
abandoned changes abandoned than to do so. If the person really doesn't know and
can't figure out how to a) join the mailing list, b) get on IRC, c) write a
comment on the change or d) reach out to anyone in the project by any other
means to express that (s)he does not know how to fix the issue flagged in weeks,
I'm not sure we will miss that person as a contributor so much either. And yes,
the message should be strong, telling that the change has passed the point where
it most probably will have no traction anymore and active action needs to be
taken to continue the workflow. At the same time, let's turn this around: how
many new contributors do we drive away because of the reaction "Whoa, this many
changes have been sitting here for weeks, I have no chance to get my change in
quickly"?

Specifically to Nova, Swift and Cinder folks:
How much benefit do you see in bug lifecycle management from the abandoning? I
would assume bugs carrying a message that their proposed fix was abandoned get
way more traction than the ones where the fix has been stale in the queue for
weeks. And how many of those abandoned ones get reactivated?

Last, I'd like to point out that life is full of disappointments. We should not
try to keep our community in a bubble where no one ever gets disappointed nor
their feelings ever get hurt. I do not appreciate

Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-03 Thread Kuvaja, Erno
 -Original Message-
 From: Thierry Carrez [mailto:thie...@openstack.org]
 Sent: 03 March 2015 10:00
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] auto-abandon changesets considered harmful
 (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])
 
 Doug Wiegley wrote:
  [...]
  But I think some of the push back in this thread is challenging this notion
 that abandoning is negative, which you seem to be treating as a given.
 
  I don't. At all. And I don't think I'm alone.
 
 I was initially on your side: the abandoned patches are not really deleted,
 you can easily restore them. So abandoned could just mean inactive or
 stale in our workflow, and people who actually care can easily unabandon
 them to make them active again. And since abandoning is currently the
 only way to permanently get rid of stale / -2ed / undesirable changes
 anyway, so we should just use that.
 
 But words matter, especially for new contributors. For those contributors,
 someone else abandoning a proposed patch of theirs is a pretty strong
 move. To them, abandoning should be their decision, not yours (reviewers
 can -2 patches).
 
 Launchpad used to have a similar struggle between real meaning and
 workflow meaning. It used to have a single status for rejected bugs
 (Invalid). In the regular bug workflow, that status would be used for valid
 bugs that you just don't want to fix. But then that created confusion to
 people outside that workflow since the wrong word was used.
 So WontFix was introduced as a similar closed state (and then they added
 Opinion because WontFix seemed too harsh, but that's another story).
 
 We have (like always) tension around the precise words we use. You say
 Abandon is generally used in our community to set inactive. Jim says
 Abandon should mean abandon and therefore should probably be left to
 the proposer, and other ways should be used to set inactive.
 
 There are multiple solutions to this naming issue. You can rename abandon
 so that it actually means set inactive or mark as stale.
 
 Or you can restrict abandon to the owner of a change, stop defaulting to
 is:open to list changes, and introduce features in Gerrit so that a 
 is:active
 query would give you the right thing. But that query would need to be the
 Gerrit default, not some obscure query you can run or add to your dashboard
 -- otherwise we are back at step 1.
 
 --
 Thierry Carrez (ttx)
 

I'd like to ask a few questions regarding this, as I'm very much pro cleaning
the review queues of abandoned stuff.

How often do people (committer/owner/_reviewer_) abandon changes actively? I do
not mean here only cores marking other people's abandoned PSs as abandoned; I
mean how many times have you seen a person stating that (s)he will not review a
change anymore? I haven't seen that, but I've seen lots of changes where a
person has reviewed at some early stage and 10 revisions later still has not
given input again. What I'm trying to say here is that it does not make the
change any less abandoned if it's not marked abandoned by the owner. It's
rarely an active process.

Regarding the contributor experience, I'd say it's way more harmful not to mark
abandoned changes abandoned than to do so. If the person really doesn't know and
can't figure out how to a) join the mailing list, b) get on IRC, c) write a
comment on the change or d) reach out to anyone in the project by any other
means to express that (s)he does not know how to fix the issue flagged in weeks,
I'm not sure we will miss that person as a contributor so much either. And yes,
the message should be strong, telling that the change has passed the point where
it most probably will have no traction anymore and active action needs to be
taken to continue the workflow. At the same time, let's turn this around: how
many new contributors do we drive away because of the reaction "Whoa, this many
changes have been sitting here for weeks, I have no chance to get my change in
quickly"?

Specifically to Nova, Swift and Cinder folks:
How much benefit do you see in bug lifecycle management from the abandoning? I
would assume bugs carrying a message that their proposed fix was abandoned get
way more traction than the ones where the fix has been stale in the queue for
weeks. And how many of those abandoned ones get reactivated?

Last, I'd like to point out that life is full of disappointments. We should not
try to keep our community in a bubble where no one ever gets disappointed nor
their feelings ever get hurt. I do not appreciate that approach in the current
trend of raising children and I definitely do not appreciate that approach
towards adults. Perhaps the people with a bad experience will learn something
and get over it or move on. Neither is bad for the community.

- Erno

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Glance] Core nominations.

2015-03-03 Thread Kuvaja, Erno
Nikhil,

If I recall correctly this matter was discussed last time at the start of the
L-cycle, and at that time we agreed to see if there is a change of pattern later
in the cycle. There has not been one, and I do not see a reason to postpone this
again just for the courtesy of it, in the hope that some of our older cores
happen to make a review or two.

I think Flavio's proposal combined with the new members would be the right way
to reinforce the momentum we've gained in Glance over the past few months. I
think it's also the right message to send out to the new cores (including you
and myself ;) ) that activity is the key to maintaining such status.


-  Erno

From: Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
Sent: 03 March 2015 04:47
To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage 
questions)
Cc: krag...@gmail.com
Subject: Re: [openstack-dev] [Glance] Core nominations.


Hi all,



After having thoroughly thought about the proposed rotation and evaluating the 
pros and cons of the same at this point of time, I would like to make an 
alternate proposal.



New Proposal:

  1.  We should go ahead with adding more core members now.
  2.  Come up with a plan and give additional notice for the rotation. Get it 
implemented one month into Liberty.

Reasoning:



Traditionally, Glance program did not implement rotation. This was probably 
with good reason as the program was small and the developers were working 
closely together and were aware of each others' daily activities. If we go 
ahead with this rotation it would be implemented for the first time and would 
appear to have happened out-of-the-blue.



It would be good for us to make a modest attempt at maintaining the friendly 
nature of the Glance development team, give them additional notice and 
preferably send them a common email informing the same. We should propose at 
least a tentative plan for rotation so that all the other core members are 
aware of their responsibilities. This brings me to my questions: is the proposed
list for rotation comprehensive? What is the basis for leaving out some of
them? What would be a fair policy, or some level of determinism in expectations?
I believe that we should have input from the general Glance community (and the 
OpenStack community too) for the same.



In order for all this to be sorted out, I kindly request all the members to 
wait until after the k3 freeze, preferably until a time at which people would 
have a bit more time in their hand to look at their mailboxes for unexpected 
proposals of rotation. Once a decent proposal is set, we can announce the 
change-in-dynamics of the Glance program and get everyone interested familiar 
with it during the summit. Whereas, we should not block the currently active 
to-be-core members from doing great work. Hence, we should go ahead with adding 
them to the list.



I hope that made sense. If you've specific concerns, I'm free to chat on IRC as 
well.



(otherwise) Thoughts?


Cheers,
-Nikhil

From: Alexander Tivelkov ativel...@mirantis.com
Sent: Tuesday, February 24, 2015 7:26 AM
To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage 
questions)
Cc: krag...@gmail.com
Subject: Re: [openstack-dev] [Glance] Core nominations.

+1 on both proposals: rotation is definitely a step in right direction.



--
Regards,
Alexander Tivelkov

On Tue, Feb 24, 2015 at 1:19 PM, Daniel P. Berrange berra...@redhat.com wrote:
On Tue, Feb 24, 2015 at 10:47:18AM +0100, Flavio Percoco wrote:
 On 24/02/15 08:57 +0100, Flavio Percoco wrote:
 On 24/02/15 04:38 +, Nikhil Komawar wrote:
 Hi all,
 
 I would like to propose the following members to become part of the Glance 
 core
 team:
 
 Ian Cordasco
 Louis Taylor
 Mike Fedosin
 Hemanth Makkapati
 
 Please, yes!

 Actually - I hope this doesn't come out harsh - I'd really like to
 stop adding new cores until we clean up our current glance-core list.
 This has *nothing* to do with the 4 proposals mentioned above, they
 ALL have been doing an AMAZING work.

 However, I really think we need to start cleaning up our core's list
 and this sounds like a good chance to make these changes. I'd like to
 propose the removal of the following people from Glance core:

 - Brian Lamar
 - Brian Waldon
 - Mark Washenberger
 - Arnaud Legendre
 - Iccha Sethi
 - Eoghan Glynn
 - Dan Prince
 - John Bresnahan

 None of the folks in the above list have provided reviews, nor
 have they participated in Glance discussions, meetings or summit
 sessions. These are just signs that their focus has changed.

 While I appreciate their huge efforts in the past, I think it's time
 for us to move forward.

 It goes without saying that all of the folks above are more than
 welcome to join the glance-core team again if their focus goes back to
 Glance.
Yep, rotating out inactive members is an 

Re: [openstack-dev] [api][all][log] - Openstack.error common library

2015-02-25 Thread Kuvaja, Erno
Hi Eugeniya,

Please have a look at the discussion under the tag [log]. We've been discussing
this topic (a bit wider, not limited to API errors) quite regularly since the
Paris Summit, and we should have X-project specs up for review quite soon after
the Ops meetup. The workgroup meetings will start as well.

Obviously at this point the implementation is open for discussion, but so far
there has been a push to implement the tracking in the project trees rather
than consolidating it under one location.

Could you elaborate a bit on what you want to have in the headers and why? Our
plan has so far been to have the error code in the message payload so that it's
easily available for users and operators. What would this library you're
proposing actually be doing?
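To make the payload-based thinking concrete, a response would carry something
roughly like this (field names purely illustrative, nothing agreed yet):

    {
        "error": {
            "code": "GLANCE-0042",
            "message": "Image not found",
            "request_id": "req-abc123"
        }
    }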

We're more than happy to take extra hands on this, so please follow the [log]
discussion and feel free to contact me (IRC: jokke_) or Rockyg (in cc as well)
about what has been done and planned in case you need more clarification.


-  Erno
From: Eugeniya Kudryashova [mailto:ekudryash...@mirantis.com]
Sent: 25 February 2015 14:33
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [api][all] - Openstack.error common library


Hi, stackers!


As was suggested in topic [1], using an HTTP header was a good solution for 
communicating common/standardized OpenStack API error codes.

So I'd like to begin working on a common library, which will collect all
openstack HTTP API errors and assign them string error codes. My suggested
name for the library is openstack.error, but please feel free to propose
something different.


The other question is where we should put such a project: in openstack or
stackforge, or maybe oslo-incubator? I think such a project would be too massive
(due to dealing with lots and lots of exceptions) to have it as a part of
oslo, so I propose developing the project on Stackforge and then eventually
having it moved into the openstack/ code namespace when the other projects begin
using the library.


Let me know your feedback, please!


[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-January/055549.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The root-cause for IRC private channels (was Re: [all][tc] Lets keep our community open, lets fight for it)

2015-02-19 Thread Kuvaja, Erno
 -Original Message-
 From: Clark Boylan [mailto:cboy...@sapwetik.org]
 Sent: Tuesday, February 17, 2015 6:06 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] The root-cause for IRC private channels (was
 Re: [all][tc] Lets keep our community open, lets fight for it)
 
 On Tue, Feb 17, 2015, at 09:32 AM, Stefano Maffulli wrote:
  Changing the subject since Flavio's call for openness was broader than
  just private IRC channels.
 
  On Tue, 2015-02-17 at 10:37 +, Daniel P. Berrange wrote:
   If cases of bad community behaviour, such as use of passwd protected
   IRC channels, are always primarily dealt with via further private
   communications, then we are denying the voters the information they
   need to hold people to account. I can understand the desire to avoid
   publically shaming people right away, because the accusations may be
   false, or may be arising from a simple mis-understanding, but at
   some point genuine issues like this need to be public. Without this
   we make it difficult for contributors to make an informed decision
   at future elections.
 
  You got my intention right: I wanted to understand better what led
  some people to create a private channel, what were their needs. For
  that objective, having an accusatory tone won't go anywhere and
  instead I needed to provide them a safe place to discuss and then I
  would report back in the open.
 
  So far, I've only received comments in private from only one person,
  concerned about public logging of channels without notification. I
  wished the people hanging out on at least one of such private channels
  would provide more insights on their choice but so far they have not.
 
  Regarding the why at least one person told me they prefer not to use
  official openstack IRC channels because there is no notification if a
  channel is being publicly logged. Together with freenode not
  obfuscating host names, and eavesdrop logs available to any spammer,
  one person at least is concerned that private information may leak.
  There may also be legal implications in Europe, under the Data
  Protection Directive, since IP addresses and hostnames can be
  considered sensitive data. Not to mention the casual dropping of
  emails or phone numbers in public+logged channels.
 
  I think these points are worth discussing. One easy fix this person
  suggests is to make it default that all channels are logged and write
  a warning on wiki/IRC page. Another is to make the channel bot
  announce whether the channel is logged. Cleaning up the hostname
  details on join/parts from eavesdrop and put the logs behind a login
  (to hide them from spam harvesters).
 
  Thoughts?
 
 It is worth noting that just about everything else is logged too. Git repos 
 track
 changes individuals have made, this mailing list post will be publicly 
 available,
 and so on. At the very least I think the assumption should be that any
 openstack IRC channel is logged and since assumptions are bad we should be
 explicit about this. I don't think this means we require all channels 
 actually be
 logged, just advertise than many are and any can be (because really any
 individual with freenode access can set up public logging).
 
 I don't think we should need to explicitly cleanup our logs. Mostly because
 any individual can set up public logs that are not sanitized.
 Instead IRC users should use tools like cloaks or Tor to get the level of
 obfuscation and security that they desire. Freenode has docs for both, see
 https://freenode.net/faq.shtml#cloaks and
 https://freenode.net/irc_servers.shtml#tor
 
 Hope this helps,
 Clark

Hi Clark,

Sorry to say, but the above is totally irrelevant regarding the current
legislation.
The legal system does not care about individual assumptions like "everybody
should know we are breaking the law here". As for individuals setting up such
services, the responsibility for those records is on that individual, and that
individual could potentially get off the hook quite easily by claiming not to
know. As for the OpenStack Foundation doing such activity, one could argue how
far that "but we did not know" attitude carries in court.

[1] The Directive is based on the 1980 OECD Recommendations of the Council 
Concerning guidelines Governing the Protection of Privacy and Trans-Border 
Flows of Personal Data.

These recommendations are founded on seven principles, since enshrined in EU 
Directive 95/46/EC:

Notice: subjects whose data is being collected should be given notice of 
such collection.
Purpose: data collected should be used only for stated purpose(s) and for 
no other purposes.
Consent: personal data should not be disclosed or shared with third parties 
without consent from its subject(s).
Security: once collected, personal data should be kept safe and secure from 
potential abuse, theft, or loss.
Disclosure: subjects whose personal data is being collected should be 
informed as to the 

[openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread Kuvaja, Erno
Hi all,

We have almost year-old (from last update) reviews still in the queue for
Glance. The discussion was initiated in yesterday's meeting about adopting an
abandon policy for stale changes.

The documentation can be found from 
https://etherpad.openstack.org/p/glance-cleanout-of-inactive-PS and any input 
would be appreciated. For your convenience current state below:

Glance - Cleanout of inactive change proposals from review


We should start cleaning out our review list to keep the focus on changes that
have momentum. Nova is currently abandoning change proposals that have been
inactive for 4 weeks.

Proposed action (if all of the following is True, abandon the PS):

  1.  The PS has -1/-2 (including Jenkins)

  2.  The change is proposed to glance, glance_store or python-glanceclient;
specs should not be abandoned as their workflow is much slower

  3.  No activity for 28 days from Author/Owner after the -1/-2

  4.  There has been a query made to the owner to update the patch between 5 and
10 days before abandoning (comment on PS/Bug or something similar)


  *   Let's be smart on this. Flexibility is good on holiday seasons, during
feature freeze, etc.
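For anyone who wants to eyeball the candidate set before we pull any triggers,
a Gerrit search along these lines should list them (illustrative query only,
tune the age and project to taste):

    status:open project:openstack/glance age:4w (label:Code-Review<=-1 OR label:Verified<=-1)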




-  Erno
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread Kuvaja, Erno
Hi Boris,

Thanks for your input. I do like the idea of picking up the changes that have
not been active. Do you have resources in mind to dedicate to this?

My personal take is that if some piece of work has not been touched for a
month, it's probably not that important after all, and the community should use
the resources to do work that has actual momentum. The changes themselves will
not disappear; the owner is still able to revive one if it's felt that it is the
right time to continue it. The cleanup will just make it easier for people to
focus on things that are actually moving. It will also make bug tracking a bit
easier when one sees on the bug report that the patch got abandoned due to
inactivity, indicating that the owner of that bug might not be working on it
after all.


-  Erno

From: Boris Pavlovic [mailto:bpavlo...@mirantis.com]
Sent: Friday, February 13, 2015 1:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Cleanout of inactive change proposals 
from review

Hi,

I believe that keeping review queue clean is the great idea.
But I am not sure that set of these rules is enough to abandon patches.

Recently I wrote blogpost related to making OpenStack community more user 
friendly:
http://boris-42.me/thoughts-on-making-openstack-community-more-user-friendly/

tl;dr;

Patches on review are a great source of information about what is missing in a
project. Removing them from the queue means losing this essential information.
The result of such actions is that the project doesn't face users' requirements,
which is quite bad...

What if the project team continued work on all abandoned patches that cover
valid use cases and finished them?

Best regards,
Boris Pavlovic



On Fri, Feb 13, 2015 at 3:52 PM, Flavio Percoco fla...@redhat.com wrote:
On 13/02/15 11:06 +, Kuvaja, Erno wrote:
Hi all,

We have almost year-old (from last update) reviews still in the queue for
Glance. The discussion was initiated in yesterday's meeting about adopting an
abandon policy for stale changes.

The documentation can be found from
https://etherpad.openstack.org/p/glance-cleanout-of-inactive-PS
and any input would be appreciated. For your convenience the current state is
below:

Thanks for putting this together. I missed the meeting yday and this
is important.
Glance - Cleanout of inactive change proposals from review


We should start cleaning out our review list to keep the focus on changes that
have momentum. Nova is currently abandoning change proposals that have been
inactive for 4 weeks.



Proposed action (if all of the following is True, abandon the PS):

1. The PS has -1/-2 (including Jenkins)

I assume you're talking about voting -1/-2 and not Workflow, right?
(you said Jenkins after all, but just for the sake of clarity).
2. The change is proposed to glance, glance_store or python-glanceclient;
   specs should not be abandoned as their workflow is much slower

3. No activity for 28 days from Author/Owner after the -1/-2

I'd reword this in No activity. This includes comments, feedback,
discussions and or other committers taking over a patch.
4. There has been a query made to the owner to update the patch between 5 and
   10 days before abandoning (comment on PS/Bug or something similar)

 ● Let's be smart on this. Flexibility is good on holiday seasons, during
   feature freeze, etc.

+2 to the above, I like it.

Thanks again,
Flavio

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread Kuvaja, Erno
Hi,

This is getting so mixed that I'll jump to inline commenting as well.

From: Boris Pavlovic [mailto:bo...@pavlovic.me]
Sent: 13 February 2015 15:01
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Cleanout of inactive change proposals 
from review

Erno,


My personal take is that if some piece of work has not been touched for a 
month, it’s probably not that important after all and the community should use 
the resources to do some work that has actual momentum.

Based on my experience, one of the most common situations in OpenStack is the
following:
1) Somebody makes fast (but with the right idea) changes, because he (and
usually others) need it
2) It doesn't pass the review process fast
3) The author of this patch has billions of other tasks (not related to
upstream) and can't work on this change anymore
4) The patch gets abandoned and forgotten

I'm unfortunately starting to sound like a broken record, but again: if no-one
has touched the change (or taken it over) in 4 weeks, at the point when there is
a clear indication that the change will be cleaned from review if it does not
get traction, it's probably not worth keeping there any longer either.

The changes themselves will not disappear; the owner is still able to revive
one if they feel the time is right to continue it.

Nobody ever reviews abandoned changes...

Repeating the previous: if your change gets abandoned because of inactivity and
you don’t care about it, why should someone else who hasn’t cared so far?

 The cleanup will just make it easier for people to focus on things that are 
actually moving.

Making decisions based on activity around patches is not the best way to do
things.

So what would be a better way to do it? We currently have 4 pages of change
proposals in review that have been touched by anyone in Feb. Honest
question: who scrolls further than that, or even down to that 4th page? From
page 6 onward there are changes that were last touched last year. And
this is purely from the “updated” column, so I did not look at when the
owner/author/committer last touched them.

If we take a look at the structure of OpenStack projects we will see the
following:

1) Things that are moving fast/well are usually related to things that the core
team (or active members) are working on.
This team is resolving a limited set of use cases (mostly because not every
member is running its own production cloud)

This is very true; let’s drop the core team part and keep it to the active
members here. Because it’s a community, it’s extremely difficult to get people
working on something other than what they or their employers see as important.

2) Operators/Admins/DevOps that are running their own cloud have a lot of
experience and know a lot of missing use cases and
sources of issues. But usually they are not involved in the community process,
so they don't know the whole road map of the project, are not able to fully
align their patches with it, or simply just don't have enough time to work
on features.

So abandoning patches from group 2 just because of inactivity can do big harm
to the project.

I don’t think pushing for activity is a bad thing or that it would do big harm
to the project(s). These are matters of priority, and I do not see any benefit
in keeping changes in review that haven’t been touched for months (the current
situation). If this group 2 is the fundamental reason for our changes stalling
in review, we need to fix that rather than let it clutter the queue. We are
talking about an open source project and community here. I find it extremely
hard to justify asking anyone in the community to take responsibility for
someone else’s production cloud if its owner has no interest in resourcing the
work for the benefit of their own business.

 Do you have resources in mind to dedicate for this?

Sometimes I do it myself, sometimes newbies in the community do (who want
some work to get involved with), and sometimes the core team works on old patches.

We will not run out of bug-fixing work, and the commits against those bugs
will stay linked in the bug even after they get abandoned.

Important changesets are supposed to have bugs (or blueprints) assigned
to them, so even if the CS is abandoned, its description still
remains on Launchpad in one form or another, and we will not lose it
from the general project backlog

This is not true in a lot of cases. =)
 In many cases DevOps/Operators don't know about, or don't want to spend time
on, Launchpad/specs and so on.

Then we need to educate and encourage them instead of supporting the
“throw it in and someone will maybe take care of it some day” attitude.


-  Erno

Best regards,
Boris Pavlovic


On Fri, Feb 13, 2015 at 5:17 PM, Kuvaja, Erno kuv...@hp.com wrote:
Hi Boris,

Thanks for your input. I do like the idea of picking up the changes that have 
not been active. Do you have resources in mind to dedicate for this?

My

Re: [openstack-dev] [glance] Cleanout of inactive change proposals from review

2015-02-13 Thread Kuvaja, Erno
 -Original Message-
 From: James E. Blair [mailto:cor...@inaugust.com]
 Sent: 13 February 2015 16:44
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [glance] Cleanout of inactive change proposals
 from review
 
 Kuvaja, Erno kuv...@hp.com writes:
 
  Hi all,
 
  We have almost year old (from last update) reviews still in the queue
  for glance. The discussion was initiated on yesterday's meeting for
  adopting abandon policy for stale changes.
 
 Hi,
 
 Abandoning changes submitted by other people is not a good experience for
 people who are contributing to OpenStack, but fortunately, it is not
 necessary.
 
 Our current version of Gerrit supports a rich syntax for searching, which you
 can use to create personal or project dashboards.  It is quite easy to filter 
 out
 changes that appear old or inactive, without the negative experience of
 having them abandoned.
 
 Many projects, including all of the infra projects (which see a substantial
 number of changes) are able to function without automatically abandoning
 changes.
 
 If you could identify why you feel the need to abandon other peoples
 changes, I'm sure we can find a resolution.
 
 -Jim

Hi Jim,

I think you hit the spot here. It's extremely difficult to automate anything
like this in a smart and flexible way. ;)
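
For what it's worth, the kind of filtering Jim describes can be expressed with
Gerrit's search operators; a personal dashboard section along these lines (an
untested sketch using standard operators, with openstack/glance as an example
project) would separate moving changes from stale candidates:

    # changes with recent activity
    status:open project:openstack/glance -age:4week

    # stale candidates matching the proposed policy
    status:open project:openstack/glance age:4week (label:Code-Review<=-1 OR label:Verified<=-1)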

- Erno
 


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Kuvaja, Erno
 -Original Message-
 From: Donald Stufft [mailto:don...@stufft.io]
 Sent: Wednesday, February 11, 2015 4:34 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all][tc] Lets keep our community open, lets
 fight for it
 
 
  On Feb 11, 2015, at 11:15 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 
  On 2015-02-11 11:31:13 +0000 (+0000), Kuvaja, Erno wrote:
  [...]
  If you don't belong to the group of privileged living in the area and
  receiving free ticket somehow or company paying your participation
  you're not included. $600 + travel + accommodation is quite hefty
  premium to be included, not really FOSS.
  [...]
 
  Here I have to respectfully disagree. Anyone who uploads a change to
  an official OpenStack source code repository for review and has it
  approved/merged since Juno release day gets a 100% discount comp
  voucher for the full conference and design summit coming up in May.
  In addition, much like a lot of other large free software projects do
  for their conferences, the OpenStack Foundation sets aside funding[1]
  to cover travel and lodging for participants who need it.
  Let's (continue to) make sure this _is_ really FOSS, and that any of
  our contributors who want to be involved can be involved.
 
  [1] https://wiki.openstack.org/wiki/Travel_Support_Program
 
 For whatever it's worth, I totally agree that the summits don't make
 Openstack not really FOSS and I think the travel program is great, but I do
 just want to point out (as someone for whom travel is not monetarily difficult,
 but
 logistically) that decision making which requires travel can be exclusive. I
 don't personally get too bothered by it but it feels like maybe the
 fundamental issue that some are experiencing is when there are decisions
 being made via a single channel, regardless of if that channel is a phone 
 call,
 IRC, a mailing list, or a design summit. The more channels any particular
 decision involves the more likely it is nobody is going to feel like they 
 didn't
 get a chance to participate.
 
 ---
 Donald Stufft
 PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

Thanks Donald,

My point exactly, even if I now see it did not really come out that way.

Thanks Jeremy,

I'd like to point out that this discussion has been pushing an all-inclusive,
open approach. Not ATCs, not specially approved individuals, but everyone.
A mailing list can easily facilitate participation by everyone who wishes to do
so. Summits cannot. If we draw the line at ATCs and specially invited
individuals, we can throw this whole topic in the trash, as 90% of what was
discussed is then dismissed.

All,

I'm not attacking having summits; I think the face-to-face time is
incredibly valuable for all kinds of things. My point was to bring up a general
flaw in the flow between all-inclusive decision making vs. deciding in a summit
session.

- Erno

 
 


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-11 Thread Kuvaja, Erno
 -Original Message-
 From: Flavio Percoco [mailto:fla...@redhat.com]
 Sent: Wednesday, February 11, 2015 9:55 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [all][tc] Lets keep our community open, lets fight
 for it
 
 Greetings all,
 
 During the last two cycles, I've had the feeling that some of the things I 
 love
 the most about this community are degrading and moving to a state that I
 personally disagree with. With the hope of seeing these things improve, I'm
 taking the time today to share one of my concerns.
 
 Since I believe we all work with good faith and we *all* should assume such
 when it comes to things happening in our community, I won't make names
 and I won't point fingers - yes, I don't have enough fingers to point based on
 the info I have. People that fall into the groups I'll mention below know that
 I'm talking to them.
 
 This email is dedicated to the openness of our community/project.
 
 ## Keep discussions open
 
 I don't believe there's anything wrong about kicking off some discussions in
 private channels about specs/bugs. I don't believe there's anything wrong in
 having calls to speed up some discussions.
 HOWEVER, I believe it's *completely* wrong to consider those private
 discussions sufficient. If you have had that kind of private discussions, if
 you've discussed a spec privately and right after you went upstream and said:
  "This has been discussed in a call and it's good to go", I beg you to stop for 2
 seconds and reconsider that. I don't believe you were able to fit all the
 community in that call and that you had enough consensus.

++
 
 Furthermore, you should consider that having private conversations, at the
 very end, doesn't help with speeding up discussions. We've a community of
 people who *care* about the project they're working on.
 This means that whenever they see something that doesn't make much
 sense, they'll chime in and ask for clarification. If there was a private
 discussion on that topic, you'll have to provide the details of such 
 discussion
 and bring that person up to date, which means the discussion will basically
 start again... from scratch.

And when they do come and ask for clarification, do not just state that this
was already discussed and agreed.
 
 ## Mailing List vs IRC Channel
 
 I get it, our mailing list is freaking busy, keeping up with it is hard and 
 time
 consuming and that leads to lots of IRC discussions. I don't think there's
 anything wrong with that but I believe it's wrong to expect *EVERYONE* to
 be in the IRC channel when those discussions happen.
 
 If you are discussing something on IRC that requires the attention of most of
 your project's community, I highly recommend you to use the mailing list as
 oppose to pinging everyone independently and fighting with time zones.
 Using IRC bouncers as a replacement for something that should go to the
 mailing list is absurd. Please, use the mailing list and don't be afraid of 
 having
 a bigger community chiming in in your discussion.  *THAT'S A GOOD THING*
 
 Changes, specs, APIs, etc. Everything is good for the mailing list.
 We've fought hard to make this community grow, why shouldn't we take
 advantage of it?

This is a tough call... real-time communication is just so much more
efficient. You can get things done in minutes that would take hours or days to
deal with over e-mail. It also does not help that the -dev mailing list is
really crowded and the tags are not consistent (sorry for finger pointing, but
oslo seems to be especially inconsistent, with some tagging [oslo], some tagging
[oslo.something], etc. Please keep that [oslo] there ;D ).

I would not discourage people from using IRC or other communication means;
just be prepared to answer those questions again.
 
 ## Cores are *NOT* special
 
 At some point, for some reason that is unknown to me, this message
 changed and the feeling of core's being some kind of superheros became a
 thing. It's gotten far enough to the point that I've came to know that some
 projects even have private (flagged with +s), password protected, irc
 channels for core reviewers.
 
 This is the point where my good faith assumption skill falls short.
 Seriously, don't get me wrong but: WHAT IN THE ACTUAL F**K?
 
 THERE IS ABSOLUTELY NOTHING PRIVATE FOR CORE REVIEWERS*
 TO DISCUSS.

Here I do disagree. There is stuff, like private bugs for security issues, that
_should_ be kept private. Again, it speeds up progress hugely when the
discussion does not need to happen in Launchpad, and it keeps the bug itself
cleaner as well. I do agree that there should not be a secret society making
common decisions behind closed doors, but there are reasons to keep some
details initially within a closed group only. And most commonly that closed
group seems to be the cores.
 
 If anything core reviewers should be the ones *FORCING* - it seems that
 *encouraging* doesn't have the same effect anymore - *OPENNESS* in
 order to include other 

Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

2015-02-03 Thread Kuvaja, Erno
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 02 February 2015 16:19
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes
 
 On 02/01/2015 06:20 PM, Morgan Fainberg wrote:
  Putting on my sorry-but-it-is-my-job-to-get-in-your-way hat (aka
 security), let's be careful how generous we are with the user and data we
 hand back. It should give enough information to be useful but no more. I
 don't want to see us opened to weird attack vectors because we're exposing
 internal state too generously.
 
  In short let's aim for a slow roll of extra info in, and evaluate each data 
  point
 we expose (about a failure) before we do so. Knowing more about a failure is
 important for our users. Allowing easy access to information that could be
 used to attack / increase impact of a DOS could be bad.
 
  I think we can do it but it is important to not swing the pendulum too far
 the other direction too fast (give too much info all of a sudden).
 
 Security by cloud obscurity?
 
 I agree we should evaluate information sharing with security in mind.
 However, the black boxing level we have today is bad for OpenStack. At a
 certain point once you've added so many belts and suspenders, you can no
 longer walk normally any more.

++
 
 Anyway, lets stop having this discussion in abstract and actually just 
 evaluate
 the cases in question that come up.

++

- Erno
 
   -Sean
 
 --
 Sean Dague
 http://dague.net
 


Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

2015-02-03 Thread Kuvaja, Erno
Now, in my understanding, our services do not log to the user. The user gets
whatever error message/exception happens to be thrown at them. This is exactly
why we need some common identifier between them (and whoever offers the request
ID being that: I can get some of my friends with well-broken English calling you
and trying to give it to you over the phone ;) ).

More inline.

 -Original Message-
 From: Rochelle Grober [mailto:rochelle.gro...@huawei.com]
 Sent: 02 February 2015 21:34
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes
 
 What I see in this conversation is that we are talking about multiple 
 different
 user classes.
 
 Infra-operator needs as much info as possible, so if it is a vendor driver 
 that is
 erring out, the dev-ops can see it in the log.

NO! Absolutely not. This is where we need to be careful about what we classify
as DEBUG and what as INFO+, as the ops definitely do not need nor want it all.
 
 Tenant-operator is a totally different class of user.  These guys need VM
 based logs and virtual network based logs, etc., but should never see as far
 under the covers as the infra-ops *has* to see.

They see pretty much just the error messages raised to them, not the cloud
infra logs anyway. What we need to do is be more helpful towards them about
what they can and should resolve themselves and where they would need ops help.
 
 So, sounds like a security policy issue of what makes it to tenant logs and
 what stays in the data center thing.

Logs should never contain sensitive information (URIs, credentials, etc.)
regardless of where they are stored. Again, obscurity is not security either.
 
 There are *lots* of logs that are being generated.  It sounds like we need
 standards on what goes into which logs along with error codes,
 logging/reporting levels, criticality, etc.

We need guidelines. It's really hard to come up with tight rules for how things
need to be logged, as a backend failure can be critical for some services while
others might not care too much about it. (For example, if Swift has a disk
down, it's not a catastrophic failure; they just move to the next copy. But if
the backend store is down for Glance, we can do pretty much nothing. Should
these two backend store failures be logged the same way? No, they should not.)
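
To illustrate that per-project judgement, a minimal sketch (purely
illustrative; the function names and messages are invented, this is not actual
Glance or Swift code):

    import logging

    logging.basicConfig(level=logging.INFO)
    LOG = logging.getLogger("backend")

    def swift_style_replica_failure(disk, image_id):
        # One failed replica is routine for Swift: note it and move on
        # to the next copy.
        LOG.info("disk %s failed reading %s, trying next replica", disk, image_id)

    def glance_style_store_failure(image_id):
        # Glance has a single backend store; losing it is fatal for the
        # request, so it warrants operator attention.
        LOG.error("backend store unreachable, cannot serve image %s", image_id)

    swift_style_replica_failure("sdb1", "e27bb490")
    glance_style_store_failure("e27bb490")

The same category of event (a backend store failure) lands at different levels
because its impact differs per service, which is why a blanket rule is hard.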

We need to keep the decision in the projects, as mostly they are the only ones
who know how a specific error condition affects the service. Also, if the rules
do not fit, they are really difficult to enforce, so let's not pick
that fight.

- Erno
 
 --Rocky
 
 (bcc'ing the ops list so they can join this discussion, here)
 
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: Monday, February 02, 2015 8:19 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes
 
 On 02/01/2015 06:20 PM, Morgan Fainberg wrote:
  Putting on my sorry-but-it-is-my-job-to-get-in-your-way hat (aka
 security), let's be careful how generous we are with the user and data we
 hand back. It should give enough information to be useful but no more. I
 don't want to see us opened to weird attack vectors because we're exposing
 internal state too generously.
 
  In short let's aim for a slow roll of extra info in, and evaluate each data 
  point
 we expose (about a failure) before we do so. Knowing more about a failure is
 important for our users. Allowing easy access to information that could be
 used to attack / increase impact of a DOS could be bad.
 
  I think we can do it but it is important to not swing the pendulum too far
 the other direction too fast (give too much info all of a sudden).
 
 Security by cloud obscurity?
 
 I agree we should evaluate information sharing with security in mind.
 However, the black boxing level we have today is bad for OpenStack. At a
 certain point once you've added so many belts and suspenders, you can no
 longer walk normally any more.
 
 Anyway, lets stop having this discussion in abstract and actually just 
 evaluate
 the cases in question that come up.
 
   -Sean
 
 --
 Sean Dague
 http://dague.net
 
Re: [openstack-dev] [Glance] IRC logging

2015-01-13 Thread Kuvaja, Erno


 -Original Message-
 From: Dave Walker [mailto:em...@daviey.com]
 Sent: 13 January 2015 15:10
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Glance] IRC logging
 
 On 13 January 2015 at 12:32, Kuvaja, Erno kuv...@hp.com wrote:
 I'm so heavily against public logging that I will just leave the
 channel if it is enabled. My point is not foul language, and I do
 understand that there could be some benefits to it. Personally I think we
 have enough tracked public communication means like ask.openstack.org
 and the mailing lists. IRC is and has always been real-time communication
 with a defined audience.

 I think the major benefits of this defined audience are:
 1) One does not need to express themselves in a way that is fit for the public.
 ( Misunderstandings can be corrected on the fly if needed. ) There is no need
 to explain to anyone reading the logs what you actually meant during a
 conversation a month ago.
 2) There is a level of confidentiality within that defined audience. (
 For example, someone not familiar with the processes thinks they have
 found a security vulnerability and comes to the IRC channel to ask for a
 second opinion. Those details are not public, and that bug can still be
 raised and dealt with properly. Once the discussion is logged and the logs
 are publicly available, the details are publicly available as well. )
 3) That defined audience does not usually limit content. I have no problem
 throwing my e-mail address, phone number etc. into the channel; I would not
 yell them out publicly.

 For me personally the last point is the biggest problem; professionally the
 second is the major concern. I have been using IRC for such a long time that
 I'm not willing to take the risk that I can't filter myself on my regular
 channels. Meetings are a different story, as there it is a defined time and
 at least I'm in meeting mode then, knowing it will be publicly logged.

 The channels are not locked, so anyone can keep a client online and log them
 for themselves if they feel the need, and lots of people do so. There is just
 that big barrier between having it within the defined group you can see on the
 channel versus public to anyone.

 As opposed to Cindy's original statement of not wanting to be available
 off-hours, that's solved already: you can set your client to away or not
 respond. It's really common on any IRC network that a nick is online while the
 user is not, or is ignoring that real-time outreach by personal preference.
 No-one will/should take that personally or as offensive. Not having a
 bouncer/shell to run your client is also a personal preference; I doubt anyone
 can claim they could not do it with the options available nowadays.
 
   - Erno (jokke_) Kuvaja
 
 
 Hi,
 
 I think these concerns are more based on fear than any real merit.  I
 would suggest that any IRC communication should be treated as public, and
 therefore the idea of bouncing around personal contact details is pretty
 poor personal security.  If this is required, then using private messages would
 seem to be perfectly suitable.
 
 A user can join any #openstack-* channel, and not necessarily be a friend of
 the project.  The concerns about security issues should be treated as if they
 have already become public.
 
 It seems that Openstack currently has around 40 non-meeting channels
 logged[0] and contrasting with the Ubuntu project, there are some 350 public
 logged channels[1] - with the logs going back to 2004.  This has caused little
 issue over the years.
 
 It would seem logical to introduce project-wide irc logging IMO.  I
 *have* found it useful to search through archives of projects, and find it
 frustrating when this data is not available.
 
 I really struggle with the idea that contributors of a developer channel do 
 not
 consider themselves to be talking in a public forum, which to me - is the same
 as being logged.  Without this mindset, the channel (and project?) merely
 becomes a cabal developers area.
 
 [0] http://eavesdrop.openstack.org/irclogs/
 [1] http://irclogs.ubuntu.com/2015/01/01/
 
 --
 Kind Regards,
 Dave Walker

I do not have a problem telling my phone number to someone at my local, which
is packed with people I do not know who might hear it; I would have a problem
with my local if they started recording all discussions on their premises and
posting them publicly on the internet. I don't even have a problem with X
people recording their visits there as long as it stays in their private
collection; again, I would have a problem with them putting those recordings
out in public, and I would try to ensure I was not in their vicinity. Why
should I/we/one treat IRC differently from any other public venue of discussion?

- Erno
 

Re: [openstack-dev] [Glance] IRC logging

2015-01-13 Thread Kuvaja, Erno
I'm so heavily against public logging that I will just leave the channel if it
is enabled. My point is not foul language, and I do understand that there could
be some benefits to it. Personally I think we have enough tracked public
communication means like ask.openstack.org and the mailing lists. IRC is and
has always been real-time communication with a defined audience.

I think the major benefits of this defined audience are:
1) One does not need to express themselves in a way that is fit for the public.
( Misunderstandings can be corrected on the fly if needed. ) There is no need
to explain to anyone reading the logs what you actually meant during a
conversation a month ago.
2) There is a level of confidentiality within that defined audience. ( For
example, someone not familiar with the processes thinks they have found a
security vulnerability and comes to the IRC channel to ask for a second
opinion. Those details are not public, and that bug can still be raised and
dealt with properly. Once the discussion is logged and the logs are publicly
available, the details are publicly available as well. )
3) That defined audience does not usually limit content. I have no problem
throwing my e-mail address, phone number etc. into the channel; I would not
yell them out publicly.

For me personally the last point is the biggest problem; professionally the
second is the major concern. I have been using IRC for such a long time that
I'm not willing to take the risk that I can't filter myself on my regular
channels. Meetings are a different story, as there it is a defined time and at
least I'm in meeting mode then, knowing it will be publicly logged.

The channels are not locked, so anyone can keep a client online and log them
for themselves if they feel the need, and lots of people do so. There is just
that big barrier between having it within the defined group you can see on the
channel versus public to anyone.

As opposed to Cindy's original statement of not wanting to be available
off-hours, that's solved already: you can set your client to away or not
respond. It's really common on any IRC network that a nick is online while the
user is not, or is ignoring that real-time outreach by personal preference.
No-one will/should take that personally or as offensive. Not having a
bouncer/shell to run your client is also a personal preference; I doubt anyone
can claim they could not do it with the options available nowadays.

 - Erno (jokke_) Kuvaja

 -Original Message-
 From: Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
 Sent: 05 January 2015 19:11
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Glance] IRC logging
 
 Thanks Cindy!
 
 Glance cores, can you all please pitch in?
 
 -Nikhil
 
 
 From: Cindy Pallares [cpalla...@redhat.com]
 Sent: Monday, January 05, 2015 12:28 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Glance] IRC logging
 
 I've made a patch, we can vote on it there.
 
 https://review.openstack.org/#/c/145025/
 
 
 On 01/05/2015 11:15 AM, Amrith Kumar wrote:
  I think logging the channel is a benefit even if, as Nikhil points out, it 
  is not
 an official meeting. Trove logs both the #openstack-trove channel and the
 meetings when they occur. I have also had some conversations with other
 ATC's on #openstack-oslo and #openstack-security and have found that the
 eavesdrop logs at http://eavesdrop.openstack.org/irclogs/ to be invaluable
 in either bug comments or code review comments.
 
  The IRC channel is an integral part of communicating within the OpenStack
 community. The use of foul language and other inappropriate behavior
 should be monitored not by admins but by other members of the community
 and called out just as one would call out similar behavior in a non-virtual 
 work
 environment. I submit to you that profanity and inappropriate conduct in an
 IRC channel constitutes a hostile work environment just as much as it does in
 a non-virtual environment.
 
  Therefore I submit to you that there is no place for such behavior on an IRC
 channel irrespective of whether it is logged or not.
 
  Thanks,
 
  -amrith
 
  | -Original Message-
  | From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
  | Sent: Monday, January 05, 2015 11:58 AM
  | To: OpenStack Development Mailing List (not for usage questions)
  | Subject: Re: [openstack-dev] [Glance] IRC logging
  |
  |
  |
  |  On Jan 5, 2015, at 08:07, Nikhil Komawar
  |  nikhil.koma...@rackspace.com
  | wrote:
  | 
  |  Based on the feedback received, we would like to avoid logging on
  |  the
  | project channel. My take from the discussion was that it gives many
  | folks a feeling of an informal platform to express their ideas freely
  | in contrast to the meeting channels.
  | 
  |  However, at the same time I would like to point out that using
  |  foul
  | language in the open freenode channels is a bad 

Re: [openstack-dev] [Glance] IRC logging

2015-01-13 Thread Kuvaja, Erno
 -Original Message-
 From: Thierry Carrez [mailto:thie...@openstack.org]
 Sent: 13 January 2015 13:02
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Glance] IRC logging
 
 Kuvaja, Erno wrote:
  [...]
  1) One does not need to express themselves in a way that is fit for the
 public. ( Misunderstandings can be corrected on the fly if needed. ) There is
 no need to explain to anyone reading the logs what you actually meant during a
 conversation a month ago.
  2) There is a level of confidentiality within that defined audience. (
  For example, someone not familiar with the processes thinks they have
  found a security vulnerability and comes to the IRC channel to ask for a
  second opinion. Those details are not public, and that bug can still be
  raised and dealt with properly. Once the discussion is logged and the logs
  are publicly available, the details are publicly available as well. )
  3) That defined audience does not usually limit content. I have no problem
 throwing my e-mail address, phone number etc. into the channel; I would not
 yell them out publicly.
  [...]
 
 All 3 arguments point to issues you have with *public* channels, not
 *logged* channels.
 
 Our IRC channels are, in effect, already public. Anyone can join them, anyone
 can log them. An embargoed vulnerability discussed on an IRC channel
 (logged or not) should be considered leaked. I agree that logging makes it
 easier for random people to access that already-public information, but you
 can't consider an IRC channel private (and change your communication style
 or content) because it's not logged by eavesdrop.
 
 What you seem to be after is a private, invitation-only IRC channel.
 That's an orthogonal issue to the concept of logging.

Nope, what I'm saying is that I oppose public logging to the level that I
will not be part of the channel if it is enabled. If someone starts publishing
the logs they collect from the channel, my response is the same: I will ask
them to stop, and if that's not enough I will just leave. I do not wear a
tinfoil hat nor live in a bubble thinking that the information is private, but
I prefer not to make it more obvious. And your private channel would not stop
someone logging and publishing the logs anyway; any level of privacy in the
communication is based on trust, be it in participants, service/venue providers
or something else, so let's not make it more difficult than it is.

- Erno
 
 --
 Thierry Carrez (ttx)
 


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-18 Thread Kuvaja, Erno
I think that's a horrible idea. How do we do that store-independently, with the
linking dependencies?

We should not make a universal use case like this depend on a limited subset of
backends, especially non-OpenStack ones. Neither Glance nor Nova should ever
depend on having direct access to the actual medium where the images are
stored. I think this is a schoolbook example of something called a database.
It is arguable whether this should be tracked in Glance or Nova, but it should
definitely not be a dirty hack expecting specific backend characteristics.

As mentioned before, the protected image property is there to ensure that the
image does not get deleted, and that is also easy to track when the images are
queried. Perhaps the record needs to track the original state of the protected
flag, the image id and a use count: a 3-column table and a couple of API calls.
Let's at least not make it any more complicated than it needs to be, if such
functionality is desired.
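
A back-of-the-envelope sketch of such a table and the calls around it,
including the test-and-set style decrement discussed later in this thread
(hypothetical schema and names, not actual Glance code):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE image_refs (
        image_id TEXT PRIMARY KEY,
        was_protected BOOLEAN NOT NULL,   -- original state of the protected flag
        use_count INTEGER NOT NULL DEFAULT 0)""")

    def acquire(image_id, was_protected=False):
        # Called when e.g. Nova spawns an instance from the image.
        db.execute("INSERT OR IGNORE INTO image_refs VALUES (?, ?, 0)",
                   (image_id, was_protected))
        db.execute("UPDATE image_refs SET use_count = use_count + 1 "
                   "WHERE image_id = ?", (image_id,))

    def release(image_id):
        # Atomic decrement that can never drop below zero.
        db.execute("UPDATE image_refs SET use_count = use_count - 1 "
                   "WHERE image_id = ? AND use_count > 0", (image_id,))

    def deletable(image_id):
        row = db.execute("SELECT use_count FROM image_refs WHERE image_id = ?",
                         (image_id,)).fetchone()
        return row is None or row[0] == 0

    acquire("e27bb490")            # instance spawned
    print(deletable("e27bb490"))   # False
    release("e27bb490")            # instance destroyed
    print(deletable("e27bb490"))   # True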


-  Erno

From: Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
Sent: 17 December 2014 20:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?

Guess that's an implementation detail. Depends on the way you go about using
what's available now, I suppose.

Thanks,
-Nikhil

From: Chris St. Pierre [chris.a.st.pie...@gmail.com]
Sent: Wednesday, December 17, 2014 2:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?
I was assuming atomic increment/decrement operations, in which case I'm not
sure I see the race conditions. Or is atomicity assuming too much?

On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar nikhil.koma...@rackspace.com wrote:
That looks like a decent alternative if it works. However, it would be too racy
unless we implement a test-and-set for such properties, or there is a
different job which queues up these requests and performs them sequentially for
each tenant.

Thanks,
-Nikhil

From: Chris St. Pierre [chris.a.st.pie...@gmail.com]
Sent: Wednesday, December 17, 2014 10:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?
That's unfortunately too simple. You run into one of two cases:

1. If the job automatically removes the protected attribute when an image is no
longer in use, then you lose the ability to use protected on images that are
not in use. I.e., there's no way to say "nothing is currently using this
image, but please keep it around". (This seems particularly useful for
snapshots, for instance.)

2. If the job does not automatically remove the protected attribute, then an 
image would be protected if it had ever been in use; to delete an image, you'd 
have to manually un-protect it, which is a workflow that quite explicitly 
defeats the whole purpose of flagging images as protected when they're in use.

It seems like flagging an image as *not* in use is actually a fairly difficult 
problem, since it requires consensus among all components that might be using 
images.

The only solution that readily occurs to me would be to add something like a 
filesystem link count to images in Glance. Then when Nova spawns an instance, 
it increments the usage count; when the instance is destroyed, the usage count 
is decremented. And similarly with other components that use images. An image 
could only be deleted when its usage count was zero.

There are ample opportunities to get out of sync there, but it's at least a 
sketch of something that might work, and isn't *too* horribly hackish. Thoughts?

On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:
A simple solution that wouldn't require modification of glance would be a cron 
job
that lists images and snapshots and marks them protected while they are in use.

Vish

On Dec 16, 2014, at 3:19 PM, Collins, Sean sean_colli...@cable.comcast.com wrote:

 On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote:
 No, I'm looking to prevent images that are in use from being deleted. In
 use and protected are disjoint sets.

 I have seen multiple cases of images (and snapshots) being deleted while
 still in use in Nova, which leads to some very, shall we say,
 interesting bugs and support problems.

 I do think that we should try and determine a way forward on this, they
 are indeed disjoint sets. Setting an image as protected is a proactive
 measure, we should try and figure out a way to keep tenants from
 shooting themselves in the foot if possible.

 --
 Sean M. Collins

Re: [Openstack] [openstack][glance][swift-backend][icehouse]

2014-11-25 Thread Kuvaja, Erno
Hi Subbareddy,

As the logs point out, the object you're trying to upload to Swift is over the
maximum object size. Please refer to the Swift documentation on how to increase
the limit.
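
For reference, the limit hit here is Swift's max_file_size constraint; the
alternative is to let Glance segment large images into smaller Swift objects.
Roughly like this (values illustrative only; check the documentation for your
versions):

    # /etc/swift/swift.conf, on the Swift side
    [swift-constraints]
    max_file_size = 16106127360        # ~15 GiB; the default is 5 GiB

    # glance-api.conf, on the Glance side: chunk uploads instead
    swift_store_large_object_size = 5120       # MB; images above this get segmented
    swift_store_large_object_chunk_size = 200  # MB per segment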


-  Erno (jokke_) Kuvaja

From: Chinasubbareddy M [mailto:chinasubbaredd...@persistent.com]
Sent: 25 November 2014 15:16
To: openstack@lists.openstack.org
Subject: Re: [Openstack] [openstack][glance][swift-backend][icehouse]

Hi,

When I am trying to upload a 14 GB image, this is the issue I am getting:

2014-11-25 18:59:14.430 31680 TRACE swiftclient
2014-11-25 18:59:14.438 31680 ERROR glance.store.swift 
[3b26e687-0125-402b-a813-35e2c4d807fa 45e0ca2c6ef746458c4801cb0bb4c048 
22ff902721944cc592c0b4358698089b - - -] Failed to add object to Swift.
Got error from Swift: Object PUT failed: 
http://10.233.52.161:8080:8080/v1/AUTH_9bb0be211a254743be5e6d1497ca969d/glance/e27bb490-859b-429f-abda-664abd786d7c
 413 Request Entity Too Large  [first 60 chars of response] <html><h1>Request 
Entity Too Large</h1><p>The body of your r
2014-11-25 18:59:14.439 31680 ERROR glance.api.v1.upload_utils 
[3b26e687-0125-402b-a813-35e2c4d807fa 45e0ca2c6ef746458c4801cb0bb4c048 
22ff902721944cc592c0b4358698089b - - -] Failed to upload image 
e27bb490-859b-429f-abda-664abd786d7c
2014-11-25 18:59:14.439 31680 TRACE glance.api.v1.upload_utils Traceback (most 
recent call last):
2014-11-25 18:59:14.439 31680 TRACE glance.api.v1.upload_utils   File 
"/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py", line 99, in 
upload_data_to_store
2014-11-25 18:59:14.439 31680 TRACE glance.api.v1.upload_utils store)
2014-11-25 18:59:14.439 31680 TRACE glance.api.v1.upload_utils   File 
"/usr/lib/python2.7/dist-packages/glance/store/__init__.py", line 382, in 
store_add_to_backend
2014-11-25 18:59:14.439 31680 TRACE glance.api.v1.upload_utils (location, 
size, checksum, metadata) = store.add(image_id, data, size)
2014-11-25 18:59:14.439 31680 TRACE glance.api.v1.upload_utils   File 
"/usr/lib/python2.7/dist-packages/glance/store/swift.py", line 503, in add
2014-11-25 18:59:14.439 31680 TRACE glance.api.v1.upload_utils raise 
glance.store.BackendException(msg)

From: Chinasubbareddy M
Sent: Tuesday, November 25, 2014 12:33 PM
To: 'openstack@lists.openstack.org'
Subject: RE: [openstack][glance][swift-backend][icehouse]

Hi all,

I configured my glance backend to swift. While uploading images to my glance I
am not able to upload files which are larger than 10 GB; please help me out.
It is taking ages while uploading; please let me know if there are any changes
which will improve the performance.

Regards,
Subbareddy,
Persistent systems ltd.


From: Chinasubbareddy M
Sent: Monday, November 24, 2014 10:51 PM
To: openstack@lists.openstack.org
Cc: Shanker Gudipati
Subject: [openstack][glance][swift-backend][icehouse]

Hi all,

I configured my glance backend to swift. While uploading images to my glance I
am not able to upload files which are larger than 10 GB; please help me out.
It is taking ages while uploading; please let me know if there are any changes
which will improve the performance.

Regards,
Subbareddy,
Persistent systems ltd.

DISCLAIMER == This e-mail may contain privileged and confidential 
information which is the property of Persistent Systems Ltd. It is intended 
only for the use of the individual or entity to which it is addressed. If you 
are not the intended recipient, you are not authorized to read, retain, copy, 
print, distribute or use this message. If you have received this communication 
in error, please notify the sender and delete all copies of this message. 
Persistent Systems Ltd. does not accept any liability for virus infected mails.


Re: [openstack-dev] [all] using released versions of python clients in tests

2014-10-28 Thread Kuvaja, Erno
Sean,

Please correct me if I'm wrong, but I think this needs to happen on RCs not on 
CI tests.

Couple of possible problems I personally see with this approach:
1) Extensive pressure to push new client releases (perhaps the client gets
released before it is as good as intended, just to provide someone the tools to
get through tests).
2) Unnecessary slowing of development. If the needed client functionality is
merged but not released, the commits using this functionality will fail. This
IMO fights against the point of having CI, as we're still depending on internal
releases during the development process.
3) More skipped tests waiting for client release and not catching the real 
issues.
4) Over time LIBS_FROM_GIT just accumulates all the used clients, rendering the
effort useless anyway.

I do agree that we need to catch the scenarios driving you towards this on
whatever we call Stable, but anything outside of that should not be affected
just because a project does not release monthly, weekly or daily client
versions.

I might have missed something here, but I just don't see the correlation of an
unreleased server depending on an unreleased client being a problem.
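
For reference, the override Sean describes lives in devstack's local.conf;
something like this (illustrative) pulls the named libraries from git while
everything else comes from released pypi versions:

    [[local|localrc]]
    LIBS_FROM_GIT=oslo.config,python-glanceclient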

- Erno

 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 28 October 2014 12:29
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [all] using released versions of python clients in
 tests
 
 At the beginning of the month we moved through a set of patches for oslo
 libs that decoupled them from the integrated gate by testing server projects
 with released versions of oslo libraries.
 
 The way it works is that in the base devstack case all the oslo libraries are
 pulled from pypi instead of git. There is an override LIBS_FROM_GIT that lets
 you specify you want certain libraries from git instead.
 
 * on a Nova change oslo.config comes from the release pypi version.
 * on an oslo.config change we test a few devstack configurations with
 LIBS_FROM_GIT=oslo.config, so that we can ensure that proposed
 oslo.config changes won't break everyone.
 
 I believe we should do the same with all the python-*client libraries as well.
 That will ensure that servers don't depend on unreleased features of python
 client libraries, and will provide the forward testing to ensure the next
 version of the python client to be released won't ruin the world.
 
 This is mostly a heads up that I'm going to start doing this implementation. 
 If
 someone wants to raise an objection, now is the time.
 However I think breaking this master/master coupling of servers and clients
 is important, and makes OpenStack function and upgrade a bit closer to what
 people expect.
 
   -Sean
 
 --
 Sean Dague
 http://dague.net
 


Re: [openstack-dev] [glance][all] Help with interpreting the log level guidelines

2014-09-16 Thread Kuvaja, Erno
 -Original Message-
 From: Flavio Percoco [mailto:fla...@redhat.com]
 Sent: 16 September 2014 10:08
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance][all] Help with interpreting the log 
 level
 guidelines
 
 On 09/16/2014 01:10 AM, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2014-09-15 16:02:04 -0700:
  On 09/15/2014 07:00 PM, Mark Washenberger wrote:
  Hi there logging experts,
 
  We've recently had a little disagreement in the glance team about
  the appropriate log levels for http requests that end up failing due
  to user errors. An example would be a request to get an image that
  does not exist, which results in a 404 Not Found request.
 
  On one hand, this event is an error, so DEBUG or INFO seem a little
  too low. On the other hand, this error doesn't generally require any
  kind of operator investigation or indicate any actual failure of the
  service, so perhaps it is excessive to log it at WARN or ERROR.
 
  Please provide feedback to help us resolve this dispute if you feel you
 can!
 
  My feeling is this is an INFO level. There is really nothing the
  admin should care about here.
 
  Agree with Sean. INFO are useful for investigations. WARN and ERROR
  are cause for alarm.
 
 +1 this is what we do in Zaqar as well.
 
 
 --
 @flaper87
 Flavio Percoco
 

I think the debate here is not limited to 404s only. By the logging guidelines,
INFO level messages should not contain any error-related messages but rather
stuff like certain components starting/stopping, config info, etc. WARN should
not be anything that gets the ops pulled out of bed, and so on.

Also, all information that would be of interest to ops should be logged at INFO+.

Now, if we log user errors as WARN, that makes the environment supportable even
if the logging has been set as high as WARN, cleaning up the output a lot (as
INFO shouldn't contain anything out of order anyway). The current situation is
that logging at DEBUG level is the only option to get the information needed to
actually run the services, and the data needed to support them as well. If we
log user errors at INFO we get one step higher, but we still have all that
clutter, like every single request, in the logs; and if that's the direction we
want to go, we should revisit our logging guidelines as well.

Thus my two euro cents go towards WARN rather than DEBUG, and definitely not
INFO.

- Erno (jokke) Kuvaja



Re: [openstack-dev] [glance][all] Help with interpreting the log level guidelines

2014-09-16 Thread Kuvaja, Erno
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 16 September 2014 12:40
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance][all] Help with interpreting the log 
 level
 guidelines
 
 On 09/16/2014 06:44 AM, Kuvaja, Erno wrote:
  -Original Message-
  From: Flavio Percoco [mailto:fla...@redhat.com]
  Sent: 16 September 2014 10:08
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [glance][all] Help with interpreting the
  log level guidelines
 
  On 09/16/2014 01:10 AM, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2014-09-15 16:02:04 -0700:
  On 09/15/2014 07:00 PM, Mark Washenberger wrote:
  Hi there logging experts,
 
  We've recently had a little disagreement in the glance team about
  the appropriate log levels for http requests that end up failing
  due to user errors. An example would be a request to get an image
  that does not exist, which results in a 404 Not Found request.
 
  On one hand, this event is an error, so DEBUG or INFO seem a
  little too low. On the other hand, this error doesn't generally
  require any kind of operator investigation or indicate any actual
  failure of the service, so perhaps it is excessive to log it at WARN or
 ERROR.
 
  Please provide feedback to help us resolve this dispute if you
  feel you
  can!
 
  My feeling is this is an INFO level. There is really nothing the
  admin should care about here.
 
  Agree with Sean. INFO are useful for investigations. WARN and ERROR
  are cause for alarm.
 
  +1 this is what we do in Zaqar as well.
 
 
  --
  @flaper87
  Flavio Percoco
 
 
  I think the debate here does not only limit to 404s. By the logging 
  guidelines
 INFO level messages should not contain any error related messages but
 rather stuff like certain components starting/stopping, config info, etc.
 WARN should not be anything that gets the ops pulled out of bed and so on.
 
  Also all information that would be in interest of ops should be logged
 INFO+.
 
  Now if we are logging user errors as WARN that makes the environment
 supportable even if the logging has been set as high as WARN cleaning the
 output a lot (as INFO shouldn't contain anything out of order anyways).
 Current situation is that logging at DEBUG level is the only option to get the
 needed information to actually run the services and get the data needed to
 support it as well. If we log user errors on INFO we get one step higher but
 we still have all that clutter like every single request in the logs and if 
 that's
 the direction we want to go, we should revisit our logging guidelines as well.
 
  Thus my two euro cents goes towards WARN rather than debug and
 definitely not INFO.
 
 Part of it is how often you expect things to happen as well. Remember
 glanceclient operates in the context of other processes. When it hits a 404
 in Glance, it's not running in the glance context, it's running in the Nova
 context. Which means it needs to think of itself in that context.
 
 In that context we got the exception back from Glance, we know the image
 wasn't there. And we know whether or not that's a problem (glanceclient
 actually has no idea if it's a problem or not, we might be checking to make
 sure a thing isn't there, and success for us is the 404).
 
 So actually, I'm back to Jay on this, it should be DEBUG. Nova (or whoever the
 caller is) can decide if the issue warrants something larger than that.
 
 This is really the biggest issue with logging in the clients, people don't 
 think
 about the context that they are running in.
 
   -Sean
 
 --
 Sean Dague
 http://dague.net
 

Sean,

I'm not sure if we were specific enough here. We're not talking about client
logging but server logging: how should we log events like a client trying to
change protected properties, access a non-existing image, create duplicate
image IDs, etc.

So, for example, if Nova is trying to access an image that does not exist,
should we ignore it on the Glance side, or likewise when the user tries to do
something that does not succeed? In my point of view it makes life much easier
if we have information on where and why the request failed rather than just a
wsgi return code, or having to run the system with DEBUG logging to get that
information.
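
As a strawman of that difference (a sketch only, not actual Glance code): the
wsgi access log alone records just the status code, while an explicit message
on the server side carries the why without needing DEBUG:

    import logging

    logging.basicConfig(level=logging.WARNING)
    LOG = logging.getLogger("glance.api")

    def get_image(image_id, registry):
        if image_id not in registry:
            # Without this line the only trace is a 404 in the access log;
            # with it, ops can see what failed without running at DEBUG.
            LOG.warning("Image %s not found, returning 404 to the client",
                        image_id)
            return 404
        return 200

    get_image("e27bb490", registry={})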

- Erno



Re: [openstack-dev] [glance][all] Help with interpreting the log level guidelines

2014-09-16 Thread Kuvaja, Erno
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 16 September 2014 15:56
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance][all] Help with interpreting the log 
 level
 guidelines
 
 On 09/16/2014 10:16 AM, Kuvaja, Erno wrote:
  -Original Message-
  From: Sean Dague [mailto:s...@dague.net]
  Sent: 16 September 2014 12:40
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [glance][all] Help with interpreting the
  log level guidelines
 
  On 09/16/2014 06:44 AM, Kuvaja, Erno wrote:
  -Original Message-
  From: Flavio Percoco [mailto:fla...@redhat.com]
  Sent: 16 September 2014 10:08
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [glance][all] Help with interpreting
  the log level guidelines
 
  On 09/16/2014 01:10 AM, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2014-09-15 16:02:04 -0700:
  On 09/15/2014 07:00 PM, Mark Washenberger wrote:
  Hi there logging experts,
 
  We've recently had a little disagreement in the glance team
  about the appropriate log levels for http requests that end up
  failing due to user errors. An example would be a request to get
  an image that does not exist, which results in a 404 Not Found
 request.
 
  On one hand, this event is an error, so DEBUG or INFO seem a
  little too low. On the other hand, this error doesn't generally
  require any kind of operator investigation or indicate any
  actual failure of the service, so perhaps it is excessive to log
  it at WARN or
  ERROR.
 
  Please provide feedback to help us resolve this dispute if you
  feel you
  can!
 
  My feeling is this is an INFO level. There is really nothing the
  admin should care about here.
 
  Agree with Sean. INFO are useful for investigations. WARN and
  ERROR are cause for alarm.
 
  +1 this is what we do in Zaqar as well.
 
 
  --
  @flaper87
  Flavio Percoco
 
 
  I think the debate here does not only limit to 404s. By the logging
  guidelines
  INFO level messages should not contain any error related messages but
  rather stuff like certain components starting/stopping, config info, etc.
  WARN should not be anything that gets the ops pulled out of bed and so
 on.
 
  Also all information that would be in interest of ops should be
  logged
  INFO+.
 
  Now if we are logging user errors as WARN that makes the environment
  supportable even if the logging has been set as high as WARN cleaning
  the output a lot (as INFO shouldn't contain anything out of order
 anyways).
  Current situation is that logging at DEBUG level is the only option
  to get the needed information to actually run the services and get
  the data needed to support it as well. If we log user errors on INFO
  we get one step higher but we still have all that clutter like every
  single request in the logs and if that's the direction we want to go, we
 should revisit our logging guidelines as well.
 
  Thus my two euro cents goes towards WARN rather than debug and
  definitely not INFO.
 
  Part of it is how often you expect things to happen as well. Remember
  glanceclient opperates in the context of other processes. When it
  hits a 404 in Glance, it's not running in the glance context, it's
  running in the Nova context. Which means it needs to think of itself in
 that context.
 
  In that context we got the exception back from Glance, we know the
  image wasn't there. And we know whether or not that's a problem
  (glanceclient actually has no idea if it's a problem or not, we might
  be checking to make sure a thing isn't there, and success for us is the 
  404).
 
  So actually, I'm back to Jay on this, it should be DEBUG. Nova (or
  whoever the caller is) can decide if the issue warrants something larger
 than that.
 
  This is really the biggest issue with logging in the clients, people
  don't think about the context that they are running in.
 
 -Sean
 
  --
  Sean Dague
  http://dague.net
 
 
  Sean,
 
  I'm not sure if we were specific enough here. We are not talking about
 client logging but server logging: how should we log events like a client
 trying to change protected properties, accessing a non-existing image,
 creating duplicate image IDs, etc.?
 
  So for example, if Nova is trying to access an image that does not exist,
 should we ignore it on the Glance side? Or when the user tries to do
 something that does not succeed? In my point of view it makes life much
 easier if we have information on where the request failed, rather than
 just a wsgi return code or having to run the system with DEBUG logging to
 get that information.
 
 Glance client throws an ERROR on 404 from Glance server -
 http://logs.openstack.org/81/120781/4/check/check-tempest-dsvm-full/90cb640/logs/screen-n-api.txt.gz?level=ERROR
 
 Glance server does not -
 http://logs.openstack.org/81/120781/4/check/check-tempest-dsvm-full/90cb640/logs/screen-g-api.txt.gz?level=ERROR
 
 Which is why I assumed this was where the conversation started.

Re: [openstack-dev] [glance][all] Help with interpreting the log level guidelines

2014-09-16 Thread Kuvaja, Erno


 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: 16 September 2014 18:10
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance][all] Help with interpreting the log 
 level
 guidelines
 
 On 09/16/2014 10:16 AM, Kuvaja, Erno wrote:
   In my point of view it makes life
  much easier if we have information where the request failed
 
 The request did not fail. The HTTP request succeeded and Glance returned a
 404 Not Found. If the caller was expecting an image to be there, but it 
 wasn't,
 then it can log the 404 in whatever log level is most appropriate.
 
 The point is that DEBUG log level is appropriate for the glanceclient logs, 
 since
 the glanceclient doesn't know if a 404 is something to be concerned about or
 not. To glanceclient, the call succeeded.
 Communication with the Glance API server worked, authentication worked,
 and the server returned successfully stating that the image does not exist.
 
 -jay
 

Still, this is not about glanceclient logging. On that discussion I fully
agree that less is more when it comes to logging.

When we try to update an image in the Glance code and that fails because the
image is not there, I do not care how that gets stated to the end user. What
I care about is that when the user starts asking what happened, I don't get
called out of bed because the ops responsible for the service have no idea. I
also care that the ops do not need to run through a million lines of debug
logs just because they would not get the info otherwise. The reality is,
after all, that from the developer's point of view the request did not fail,
but from the user's point of view it did.

We must keep in mind that somewhere out there is a bunch of people using
these services outside of devstack who do not know the code and how it
behaves internally. They see the log messages, if any, and need to try to get
answers for people who know even less about the internals.
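
Just to make it concrete, what I'm after on the server side is roughly the
following (an illustrative sketch, not the actual Glance code; the DB call
and the exception type are stand-ins):

    import logging

    import webob.exc

    LOG = logging.getLogger(__name__)

    def update_image(db_api, context, image_id):
        try:
            return db_api.image_get(context, image_id)  # hypothetical DB call
        except KeyError:  # stand-in for Glance's NotFound exception
            msg = "Failed to find image %s to update" % image_id
            LOG.warn(msg)  # one line telling ops why the 404 happened
            raise webob.exc.HTTPNotFound(explanation=msg)

One WARN line like that tells the ops what went wrong without running the
whole service at DEBUG.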

- Erno

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance][FFE] Refactoring Glance Logging

2014-09-08 Thread Kuvaja, Erno
All,

There are two changes still not landed from 
https://blueprints.launchpad.net/glance/+spec/refactoring-glance-logging

https://review.openstack.org/116626

and

https://review.openstack.org/#/c/117204/

Merging of the changes was delayed past J3 to avoid any potential merge 
conflicts. A minor change was made when rebasing (a couple of LOG.exception 
calls were changed to LOG.error based on the review feedback).
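
For context: LOG.exception() logs at ERROR level and also appends the current
traceback, so where the traceback is just noise a plain LOG.error() is the
better fit. A minimal illustration (not taken from the actual patches):

    import logging

    logging.basicConfig(level=logging.INFO)
    LOG = logging.getLogger(__name__)

    def write_to_store(data):
        raise IOError("disk full")  # stand-in for a real store backend

    try:
        write_to_store(b"...")
    except IOError as e:
        LOG.error("Image save failed: %s", e)  # ERROR, without a traceback
        # LOG.exception("Image save failed")   # ERROR, plus the traceback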

I would like to request a Feature Freeze Exception, if needed, to finish the 
Juno logging refactoring and get these two changes merged in.

BR,
Erno (jokke_) Kuvaja
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance][Nova][All] requests 2.4.0 breaks glanceclient

2014-09-03 Thread Kuvaja, Erno
Hi All,

While investigating glanceclient gating issues we narrowed them down to 
requests 2.4.0, which was released 2014-08-29. urllib3 now raises a new 
ProtocolError which does not get caught and breaks at least glanceclient.
The following error can be seen on the console: ProtocolError: ('Connection 
aborted.', gaierror(-2, 'Name or service not known')).

Unfortunately we hit this issue just under the freeze. Apparently it breaks 
novaclient as well, and there is a change 
(https://review.openstack.org/#/c/118332/) proposed to requirements to 
exclude version 2.4.0.
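
(i.e. something along the lines of

    requests!=2.4.0

in the requirements files; the exact specifier is whatever gets agreed on in
the review.)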

Are there any other projects using requirements and seeing issues with the 
latest version?


-  Erno (jokke_) Kuvaja

kuv...@hp.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][feature freeze exception] Proposal for using Launcher/ProcessLauncher for launching services

2014-09-03 Thread Kuvaja, Erno
In principle I like the idea and concept, a lot. In practice I don't think 
the glance code is in a state where we could say SIGHUP reloads our configs. 
Even more, my concern is that, based on the behavior seen, some config 
options get picked up and some do not.

As long as we do not have a definite list documented of which options get 
updated on the fly, and what the actual behavior is at the point when the new 
config is picked up (let's say we have some locks in place and the locking 
folder gets updated; what happens?), I don't think we should be taking this 
functionality in. Even if the current behavior is fundamentally broken, at 
least it's broken in a way that is consistent, and the behavior is known.
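
For reference, the proposed wiring is roughly the following (a sketch of how
the oslo-incubator service module is used, as I understand it; not the actual
glance patch, and the service class here is hypothetical):

    from glance.openstack.common import service  # synced from oslo-incubator

    def main():
        server = GlanceApiService()  # hypothetical service object
        launcher = service.ProcessLauncher()
        launcher.launch_service(server, workers=4)
        # A SIGHUP to the parent is meant to restart the workers,
        # re-reading the configuration files in the process.
        launcher.wait()

My worry above is exactly what "re-reading the configuration files" ends up
meaning, option by option.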


-  Erno (jokke_) Kuvaja

kuv...@hp.com

From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
Sent: 03 September 2014 14:39
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance][feature freeze exception] Proposal for 
using Launcher/ProcessLauncher for launching services

Hi All,

Please give me your support for applying the freeze exception for using the 
oslo-incubator service framework in glance, based on the following blueprint:

https://blueprints.launchpad.net/glance/+spec/use-common-service-framework

I have ensured that after making these changes everything is working smoothly.

I have done the functional testing for following three scenarios:

1.   Enabled SSL and checked that requests are processed by the API service 
before and after the SIGHUP signal

2.   Disabled SSL and checked that requests are processed by the API service 
before and after the SIGHUP signal

3.   I have also ensured that reloading of parameters like 
filesystem_store_datadir and filesystem_store_datadirs is working effectively 
after sending the SIGHUP signal.

To test the 1st and 2nd scenarios I created a Python script which sends 
multiple requests to glance at a time, and added a cron job to send a SIGHUP 
signal to the parent process.
I ran the above script for 1 hour and confirmed that every request was 
processed successfully.

Please consider this feature to be a part of the Juno release.



Thanks & Regards,

Abhishek Kekane


From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
Sent: 02 September 2014 19:11
To: OpenStack Development Mailing List 
(openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [glance][feature freeze exception] Proposal for using 
Launcher/ProcessLauncher for launching services

Hi All,

I'd like to ask for a feature freeze exception for using the oslo-incubator service 
framework in glance, based on the following blueprint:

https://blueprints.launchpad.net/glance/+spec/use-common-service-framework


The code to implement this feature is under review at present.

1. Sync oslo-incubator service module in glance: 
https://review.openstack.org/#/c/117135/2
2. Use Launcher/ProcessLauncher in glance: 
https://review.openstack.org/#/c/117988/


If we have this feature in glance then we will be able to use features like 
reloading the glance configuration file without a restart, graceful shutdown, 
etc. It will also use common code, like other OpenStack projects (nova, 
keystone, cinder) do.


We are ready to address all the concerns of the community if they have any.


Thanks & Regards,

Abhishek Kekane

__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data. If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Kuvaja, Erno
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 03 September 2014 13:37
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt
 
 I'm not sure why people keep showing up with sort requirements patches
 like - https://review.openstack.org/#/c/76817/6, however, they do.
 
 All of these need to be -2ed with prejudice.
 
 requirements.txt is not a declarative interface. The order is important as pip
 processes it in the order it is. Changing the order has impacts on the overall
 integration which can cause wedges later.
 
 So please stop.
 
   -Sean
 
 --
 Sean Dague
 http://dague.net


Hi Sean & all,

Could you please open this up a little bit? What are we afraid of breaking 
regarding the order of these requirements? I tried to go through the pip 
documentation but I could not find any reason for a specific order of the 
lines, though there were references to keeping the order.

I'm now assuming one thing here, as I do not know if it's the case: none of 
the packages enables/disables functionality depending on what has been 
installed on the system before, but they have their own dependencies to 
provide those. Based on this assumption I can think of only one scenario 
causing us issues. That is us abusing the example in point 2 of 
https://pip.pypa.io/en/latest/user_guide.html#requirements-files, meaning: we 
install package X depending on package Y>=1.0,<2.0 before installing package 
Z depending on Y>=1.0, to ensure that package Y<2.0 without pinning package Y 
in our requirements.txt. I certainly hope that this is not the case, as 
depending on a 3rd party vendor providing us a specific version of a 
dependency package would be extremely stupid.
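
Sketching that hypothetical scenario as a requirements.txt (the package names
are invented for illustration):

    # pip processes this file top to bottom
    X    # X itself requires Y>=1.0,<2.0
    Z    # Z requires Y>=1.0; Y is already satisfied at <2.0 thanks to X

If Z came first, pip could install the newest Y for Y>=1.0 and only then trip
over X's Y<2.0 requirement.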

Other than that I really don't know how the order could cause us issues, but I 
would be really happy to learn something new today if that is the case or if my 
assumption went wrong.

Best Regards,
Erno (jokke_) Kuvaja
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt

2014-09-03 Thread Kuvaja, Erno
 -Original Message-
 From: Clark Boylan [mailto:cboy...@sapwetik.org]
 Sent: 03 September 2014 20:10
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt
 
 
 
 On Wed, Sep 3, 2014, at 11:51 AM, Kuvaja, Erno wrote:
   -Original Message-
   From: Sean Dague [mailto:s...@dague.net]
   Sent: 03 September 2014 13:37
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: [openstack-dev] [all] [glance] do NOT ever sort
   requirements.txt
  
   I'm not sure why people keep showing up with sort requirements
   patches like - https://review.openstack.org/#/c/76817/6, however, they
 do.
  
   All of these need to be -2ed with predjudice.
  
   requirements.txt is not a declarative interface. The order is
   important as pip processes it in the order it is. Changing the order
   has impacts on the overall integration which can cause wedges later.
  
   So please stop.
  
 -Sean
  
   --
   Sean Dague
   http://dague.net
  
 
  Hi Sean & all,
 
  Could you please open this up a little bit? What are we afraid of
  breaking regarding the order of these requirements? I tried to go
  through the pip documentation but could not find any reason for a
  specific order of the lines, though there were references to keeping the order.
 
  I'm now assuming one thing here as I do not know if that's the case.
  None of the packages enables/disables functionality depending of what
  has been installed on the system before, but they have their own
  dependencies to provide those. Based on this assumption I can think of
  only one scenario causing us issues. That is us abusing the example in
  point 2 of
  https://pip.pypa.io/en/latest/user_guide.html#requirements-files
  meaning; we install package X depending on package Y>=1.0,<2.0 before
  installing package Z depending on Y>=1.0 to ensure that package Y<2.0
  without pinning package Y in our requirements.txt. I certainly hope
  that this is not the case as depending 3rd party vendor providing us 
  specific
 version of dependency package would be extremely stupid.
 
  Other than that I really don't know how the order could cause us
  issues, but I would be really happy to learn something new today if
  that is the case or if my assumption went wrong.
 
  Best Regards,
  Erno (jokke_) Kuvaja
 
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 The issue is described in the bug that Josh linked
 (https://github.com/pypa/pip/issues/988). Basically pip doesn't do
 dependency resolution in a way that lets you treat requirements as order
 independent. For that to be the case pip would have to evaluate all
 dependencies together then install the intersection of those dependencies.
 Instead it iterates over the list(s) in order and evaluates each dependency as
 it is found.
 
 Your example basically describes where this breaks. You can both depend on
 the same dependency at different versions and pip will install a version that
 satisfies only one of the dependencies and not the other leading to a failed
 install. However I think a more common case is that openstack will pin a
 dependency and say Y>=1.0,<2.0 and the X dependency will say Y>=1.0. If
 the X dependency comes first you get version 2.5 which is not valid for your
 specification of Y>=1.0,<2.0 and pip fails.
 You fix this by listing Y before the X dependency that installs Y with less
 restrictive
 boundaries.
 
 Another example of a slightly different failure would be hacking, flake8,
 pep8, and pyflakes. Hacking installs a specific version of flake8, pep8, and
 pyflakes so that we do static lint checking with consistent checks each
 release. If you sort this list alphabetically instead of allowing hacking to 
 install
 its deps flake8 will come first and you can get a different version of pep8.
 Different versions of pep8 check different things and now the gate has
 broken.
 
 The most problematic thing is you can't count on your dependencies from
 not breaking you if they come first (because they are evaluated first).
 So in cases where we know order is important (hacking and pbr and probably
 a handful of others) we should be listing them as early as possible in the
 requirements.
 
 Clark
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks Clark,

To be honest, neither the issue nor your explanation clarified this for me. 
Please forgive me for hunting this down, but it seems to be an extremely 
important topic, so I would like to understand where it's coming from (and 
hopefully others will benefit from


Re: [openstack-dev] [glance] Bug Days - July 15/16

2014-07-15 Thread Kuvaja, Erno
Good Morning/Day,

I have just started the Etherpad 
https://etherpad.openstack.org/p/glance-bugday-progress for tracking 
progress. Please also join #openstack-glance on Freenode for the chat. All 
hands on deck would be appreciated!

BR,
Erno

From: Fei Long Wang [mailto:feil...@catalyst.net.nz]
Sent: 11 July 2014 03:43
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Bug Days - July 15/16

As for the bug tagging, I just added some tags in the wiki for Glance: 
https://wiki.openstack.org/wiki/Bug_Tags#Glance, please feel free to complete 
it.

On 11/07/14 12:07, Arnaud Legendre wrote:
Hi All,

Glance is going to have bug days next week on Tuesday July 15 and Wednesday 
16. This is a two-day bug day to accommodate most of the Glance contributors 
(time zones, etc.). Of course, you do not need to be there 100% of both days…

This is a great opportunity to:
- triage bugs
- fix bugs
- tag bugs
and hopefully make Glance better…!

Please register yourself on the etherpad if you plan to participate:
https://etherpad.openstack.org/p/glance-bug-day

We will be hanging out on #openstack-glance. We seriously need help so even the 
bare minimum will be better than nothing…!

A couple of links:
- All Glance bugs: https://bugs.launchpad.net/glance
- Bugs that have gone stale: 
https://bugs.launchpad.net/glance/+bugs?orderby=date_last_updated&field.status:list=INPROGRESS&assignee_option=any
- Untriaged bugs: 
https://bugs.launchpad.net/glance/+bugs?orderby=-importance&search=Search&field.status:list=NEW&assignee_option=any
- Bugs without owners: 
https://bugs.launchpad.net/glance/+bugs?orderby=-importance&search=Search&field.status:list=NEW&field.status:list=CONFIRMED&field.status:list=TRIAGED&field.status:list=INPROGRESS&assignee_option=none


Cheers,
Arnaud




___

OpenStack-dev mailing list

OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Cheers & Best regards,

Fei Long Wang (王飞龙)

--

Senior Cloud Software Engineer

Tel: +64-48032246

Email: flw...@catalyst.net.nz

Catalyst IT Limited

Level 6, Catalyst House, 150 Willis Street, Wellington

--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All] Removing translations from debug logging where exception formatted into the message

2014-06-26 Thread Kuvaja, Erno
Hi,

We hit a nasty situation where _() was removed from DEBUG level logging while 
an exception was formatted into the message, like the following:
msg = "Forbidden upload attempt: %s" % e
This caused gettextutils to raise a UnicodeError:
2014-06-26 18:16:24.221 |   File "glance/openstack/common/gettextutils.py", 
line 333, in __str__
2014-06-26 18:16:24.222 | raise UnicodeError(msg)
2014-06-26 18:16:24.222 | UnicodeError: Message objects do not support str() 
because they may contain non-ascii characters. Please use unicode() or 
translate() instead.
(For example 
http://logs.openstack.org/63/102863/1/check/gate-glance-python27/6ad16a3/console.html#_2014-06-26_15_57_12_262)

As discussed with mriedm, jecarey and dhellmann on #openstack-oslo, this can 
be avoided by making the message unicode, like:
msg = u"Forbidden upload attempt: %s" % e
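
A self-contained illustration of the Python 2 semantics at play (the Message
class below is a simplified stand-in for oslo's gettextutils.Message, not the
real code):

    class Message(object):
        def __init__(self, text):
            self._text = text

        def __unicode__(self):
            return self._text

        def __str__(self):
            raise UnicodeError("Message objects do not support str()")

    e = Message(u"Image 1234 is protected")
    print(u"Forbidden upload attempt: %s" % e)  # OK: %s calls __unicode__
    print("Forbidden upload attempt: %s" % e)   # raises: %s calls __str__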

For us in Glance it caused a bunch of gating issues, so hopefully this helps 
the rest of the projects avoid the same, or at least tackle it a bit faster.


-  Erno (jokke_) Kuvaja
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Usage of _L?() translation functions

2014-06-20 Thread Kuvaja, Erno
Hi Matt,

Is it perhaps this one you're looking for:
https://wiki.openstack.org/wiki/LoggingStandards#Guidelines
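
In short, the guidelines map log levels to marker functions roughly like this
(a sketch; the import below assumes nova's Juno-era i18n module, so the exact
path and names may differ per project):

    import logging

    from nova.i18n import _, _LE, _LI, _LW  # marker names per the guidelines

    LOG = logging.getLogger(__name__)

    def do_something():
        LOG.debug("debug messages are never translated")
        LOG.info(_LI("something routine happened"))
        LOG.warning(_LW("something suspicious happened"))
        LOG.error(_LE("something went wrong"))
        raise ValueError(_("user-facing messages use plain _()"))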

 -Erno

 -Original Message-
 From: Matthew Booth [mailto:mbo...@redhat.com]
 Sent: 20 June 2014 14:46
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [nova] Usage of _L?() translation functions
 
 I read a doc on this the other day, but for the life of me I can't remember
 where. It's relevant to this review:
 
 https://review.openstack.org/#/c/97612/18/nova/virt/vmwareapi/volumeo
 ps.py
 
 If anybody knowledgeable could give this a glance I'd be grateful. I'd like to
 know I'm not giving duff advice.
 
 Thanks,
 
 Matt
 --
 Matthew Booth
 Red Hat Engineering, Virtualisation Team
 
 Phone: +442070094448 (UK)
 GPG ID:  D33C3490
 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Unifying configuration file

2014-06-18 Thread Kuvaja, Erno
 -Original Message-
 From: Mark McLoughlin [mailto:mar...@redhat.com]
 Sent: 18 June 2014 06:58
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [glance] Unifying configuration file
 
 Hey
 
 On Tue, 2014-06-17 at 17:43 +0200, Julien Danjou wrote:
  On Tue, Jun 17 2014, Arnaud Legendre wrote:
 
   @ZhiYan: I don't like the idea of removing the sample configuration
   file(s) from the git repository. Many people do not want to have to
   checkout the entire codebase and tox every time they have to verify
   a variable name in a configuration file. I know many people who were
   really frustrated where they realized that the sample config file was gone
 from the Nova repo.
   However, I agree with the fact that it would be better if the sample
   was 100% accurate: so the way I would love to see this working is to
   generate the sample file every time there is a config change (this
   being totally automated (maybe at the gate level...)).
 
  You're a bit late on this. :)
  So what I did these last months (year?) in several project, is to
  check at gate time the configuration file that is automatically
  generated against what's in the patches.
  That turned out to be a real problem because sometimes some options
  changes from the eternal module we rely on (e.g. keystone authtoken or
  oslo.messaging). In the end many projects (like Nova) disabled this
  check altogether, and therefore removed the generated configuration
  file From the git repository.

Yes, and the users who relied on those config files on GitHub were really 
upset about that.
 
 For those that casually want to refer to the sample config, what would help if
 there was Jenkins jobs to publish the generated sample config file
 somewhere.
 
 For people installing the software, it would probably be nice if pbr added
 'python setup.py sample_config' or something.
 
   @Julien: I would be interested to understand the value that you see
   of having only one config file? At this point, I don't see why
   managing one file is more complicated than managing several files
   especially when they are organized by categories. Also, scrolling
   through the registry settings every time I want to modify an api setting
 seem to add some overhead.
 
  Because there's no way to automatically generate several configuration
  files with each its own set of options using oslo.config.
 
 I think that's a failing of oslo.config, though. Glance's layout of config 
 files is
 useful and intuitive.

I totally agree.
 
  Glance is (one of?) the last project in OpenStack to manually write
  its sample configuration file, which are not up to date obviously.

We can learn from others, can't we? 
I think the key point here is part of the comment in the Cinder discussion to 
remove the sample config  https://review.openstack.org/#/c/96581/ (Mathieu Jun 
6 7:53 PM):

Note that it's not just about Cinder, it's about all the other projects. They 
[ops; clarification added by Erno] don't track changes in Gerrit; it's not 
something they do in their day-to-day job. It just happened that we heard 
about this change on the openstack-operators mailing list.


We are again having this discussion on the dev list without involving the most 
important group, our ops. 
 
 Neutron too, but not split out per-service. I don't find Neutron's config file
 layout as intuitive.
 
  So really this is mainly about following what every other projects did
  the last year(s).
 
 There's a balance here between what makes technical sense and what helps
 users. If Glance has support for generating a unified config file while also
 manually maintaining the split configs, I think that's a fine compromise.


+100
I'd add to that: until oslo.config can provide us more than one automatically 
generated config file. Now let's remember that if you run glance with the 
registry anywhere else than devstack, you are most probably running the 
registry and the API on different servers. Making them rely on a single 
config is about as smart an idea as saying that we should not provide 
project-independent configs, but combine all the configs into a single 
OpenStack config file with a nice note: "good luck trying to figure out which 
bits you need from this". That would probably make sense if you never ran 
these outside of devstack, having all the services on the same machine anyway.
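
For what it's worth, generating the sample today is a manual step along these
lines (a sketch assuming the oslo-incubator generator script is synced into
the tree; the flags are from memory):

    # run from the project root
    ./tools/config/generate_sample.sh -b . -p glance -o etc

A gate job publishing the output of that, as suggested above, would cover the
casual "look it up" use case.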
 
 Mark.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

- Erno (jokke) Kuvaja

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Unifying configuration file

2014-06-17 Thread Kuvaja, Erno
I do not like this idea. As of now we are on 5 different config files (+ 
policy and schema). One for each service (API and Registry) would still be 
OK, but putting everything together would just become messy.

If the *-paste.ini files get migrated into the .conf files, that would bring 
the count down, but please do not try to mix registry and API configs together.
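
For reference, a unified glance-paste.ini along Julien's lines would
presumably look something like the following (a sketch only; the pipeline
contents are abbreviated from the current files):

    [pipeline:glance-api]
    pipeline = versionnegotiation unauthenticated-context rootapp

    [pipeline:glance-registry]
    pipeline = unauthenticated-context registryapp

    # ...shared filter and app sections would follow...

with each server reading its own default pipeline.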

- Erno (jokke) Kuvaja

 -Original Message-
 From: Flavio Percoco [mailto:fla...@redhat.com]
 Sent: 17 June 2014 15:19
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance] Unifying configuration file
 
 On 17/06/14 15:59 +0200, Julien Danjou wrote:
 Hi guys,
 
 So I've started to look at the configuration file used by Glance and I
 want to switch to one configuration file only.
 I stumbled upon this blueprint:
 
   https://blueprints.launchpad.net/glance/+spec/use-oslo-config
 
 
 w.r.t using config.generator https://review.openstack.org/#/c/83327/
 
 which fits.
 
 Does not look like I can assign myself to it, but if someone can do so,
 go ahead.
 
 So I've started to work on that, and I got it working. My only problem
 right now concerns the [paste_deploy] options section that is provided by
 Glance. I'd like to remove this section altogether, as it's not
 possible to have it and have the same configuration file read by both
 glance-api and glance-registry.
 My idea is also to unify glance-api-paste.ini and
 glance-registry-paste.ini into glance-paste.ini and then have each
 server read its own default pipeline (pipeline:glance-api).
 
 Does that sound reasonable to everyone?
 
 +1, it sounds like a good idea. I don't think we need to maintain 2
 separate config files, especially now that the registry service is optional.
 
 Thanks for working on this.
 Flavio
 
 --
 @flaper87
 Flavio Percoco
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >