Re: [openstack-dev] [magnum] Schedule for rest of Kilo

2015-02-02 Thread Tony Breeds
On Tue, Jan 27, 2015 at 04:27:21AM +, Adrian Otto wrote:
> Tony,
> 
> That would be terrific. Which iCal feed were you thinking of? I was planning 
> on making something similar to this:

Sorry Adrian, I was a little off topic.

I was thinking that when you settle on a schedule for your regular team
meetings (and add them to [1]), I'll handle keeping them in sync with the
openstack iCal feed [2].

Yours Tony.

[1] https://wiki.openstack.org/wiki/Meetings
[2] 
https://www.google.com/calendar/embed?src=bj05mroquq28jhud58esggqmh4%40group.calendar.google.com&ctz=Iceland/Reykjavik


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] Scheduler sub-group meeting agenda 2/3

2015-02-02 Thread Dugger, Donald D
Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)


1)  Remove direct nova DB/API access by Scheduler Filters - 
https://review.openstack.org/138444/

2)  Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo


--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



Re: [openstack-dev] [heat] operators vs users for choosing convergence engine

2015-02-02 Thread Robert Collins
I think incremental adoption is a great principle to have and this
will enable that.

So +1

-Rob

On 3 February 2015 at 13:52, Steve Baker  wrote:
> A spec has been raised to add a config option to allow operators to choose
> whether to use the new convergence engine for stack operations. For some
> context you should read the spec first [1]
>
> Rather than doing this, I would like to propose the following:
> * Users can (optionally) choose which engine to use by specifying an engine
> parameter on stack-create (choice of classic or convergence)
> * Operators can set a config option which determines which engine to use if
> the user makes no explicit choice
> * Heat developers will set the default config option from classic to
> convergence when convergence is deemed sufficiently mature
>
> I realize it is not ideal to expose this kind of internal implementation
> detail to the user, but choosing convergence _will_ result in different
> stack behaviour (such as multiple concurrent update operations) so there is
> an argument for giving the user the choice. Given enough supporting
> documentation they can choose whether convergence might be worth trying for
> a given stack (for example, a large stack which receives frequent updates)
>
> Operators likely won't feel they have enough knowledge to make the call that
> a heat install should be switched to using all convergence, and users will
> never be able to try it until the operators do (or the default switches).
>
> Finally, there are also some benefits to heat developers. Creating a whole
> new gate job to test convergence-enabled heat will consume its share of CI
> resource. I'm hoping to make it possible for some of our functional tests to
> run against a number of scenarios/environments. Being able to run tests
> under classic and convergence scenarios in one test run will be a great help
> (for performance profiling too).
>
> If there is enough agreement then I'm fine with taking over and updating the
> convergence-config-option spec.
>
> [1]
> https://review.openstack.org/#/c/152301/2/specs/kilo/convergence-config-option.rst
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Heat] Talk on Jinja Metatemplates for upcoming summit

2015-02-02 Thread Pratik Mallya
Hey Pavlo,

The main aim of this effort is to allow more efficient template catalog 
management, not unlike what is given in [2]. As a service to our customers, 
Rackspace maintains a catalog of useful templates [3], which are also exposed to 
the user through the UI. The authors of these templates had expressed 
difficulties in having to maintain several templates depending on resource 
availability, account type, etc., so they asked for the ability to use the Jinja 
templating system to instead include everything in one Heat "meta-template" 
(Heat Template + Jinja; I’m not sure if that term is used for something else 
already :-) ). E.g., [4] shows a very simple case of having to choose between 
two templates depending upon the availability of Neutron on the network.
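As an aside, the pattern in [4] boils down to a few lines. A minimal sketch (the resource snippets and the neutron_available variable are invented for illustration; they are not taken from the actual catalog templates):

```python
# Sketch of the "meta-template" idea: a Jinja-templated Heat template
# that picks resources based on Neutron availability. The template body
# and variable name here are illustrative assumptions.
from jinja2 import Template

META_TEMPLATE = """\
heat_template_version: 2013-05-23
resources:
{% if neutron_available %}
  a_port:
    type: OS::Neutron::Port
{% else %}
  a_server:
    type: OS::Nova::Server
{% endif %}
"""

# Render with Neutron available: only the Neutron resource survives.
rendered = Template(META_TEMPLATE).render(neutron_available=True)
print(rendered)
```

Rendering happens before the result ever reaches Heat, so Heat itself only sees a plain, valid template.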

I hope that clarifies things a bit. Let me know if you have more questions!

Thanks!
-Pratik

[3] https://github.com/rackspace-orchestration-templates
[4] 
https://github.com/rackspace-orchestration-templates/jinja-test/blob/master/jinja-test.yaml
On Feb 2, 2015, at 1:44 PM, Pavlo Shchelokovskyy 
<pshchelokovs...@mirantis.com> wrote:

Hi Pratik,

what would be the aim for this templating? I ask since we in Heat try to keep 
the imperative logic like e.g. if-else out of heat templates, leaving it to 
other services. Plus there is already a spec for a heat template function to 
repeat pieces of template structure [1].

I can definitely say that some other OpenStack projects that are consumers of 
Heat will be interested - Trove already tries to use Jinja templates to create 
Heat templates [2], and possibly Sahara and Murano might be interested as well 
(I suspect though the latter already uses YAQL for that).

[1] https://review.openstack.org/#/c/140849/
[2] 
https://github.com/openstack/trove/blob/master/trove/templates/default.heat.template

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Mon, Feb 2, 2015 at 8:29 PM, Pratik Mallya 
<pratik.mal...@rackspace.com> wrote:
Hello Heat Developers,

As part of an internal development project at Rackspace, I implemented a 
mechanism to allow using the Jinja templating system in heat templates. I was 
hoping to give a talk on it at the upcoming summit (which will be the 
first summit since I started working on openstack). Have any of you worked, or 
are you working, on something similar? If so, could you please contact me and we 
can maybe propose a joint talk? :-)

Please let me know! It’s been interesting work and I hope the community will be 
excited to see it.

Thanks!
-Pratik




Re: [openstack-dev] [trove]how to enable trove in dashboard?

2015-02-02 Thread Li Tianqing
Sorry, i find it.


--

Best
Li Tianqing

At 2015-02-03 10:48:06, "Li Tianqing"  wrote:

Hello,
   I first installed devstack, then installed trove from source code. After 
searching on the net, I could not find how to enable trove in the dashboard.
  Can someone help point out how to?




--

Best
Li Tianqing




[openstack-dev] [trove]how to enable trove in dashboard?

2015-02-02 Thread Li Tianqing
Hello,
   I first installed devstack, then installed trove from source code. After 
searching on the net, I could not find how to enable trove in the dashboard.
  Can someone help point out how to?




--

Best
Li Tianqing


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Everett Toews
On Feb 2, 2015, at 7:24 PM, Sean Dague <s...@dague.net> 
wrote:

On 02/02/2015 05:35 PM, Jay Pipes wrote:
On 01/29/2015 12:41 PM, Sean Dague wrote:
Correct. This actually came up at the Nova mid cycle in a side
conversation with Ironic and Neutron folks.

HTTP error codes are not sufficiently granular to describe what happens
when a REST service goes wrong, especially if it goes wrong in a way
that would let the client do something other than blindly try the same
request, or fail.

Having a standard json error payload would be really nice.

{
  "fault": "ComputeFeatureUnsupportedOnInstanceType",
  "message": "This compute feature is not supported on this kind of
instance type. If you need this feature please use a different instance
type. See your cloud provider for options."
}

That would let us surface more specific errors.


Standardization here from the API WG would be really great.

What about having a separate HTTP header that indicates the "OpenStack
Error Code", along with a generated URI for finding more information
about the error?

Something like:

X-OpenStack-Error-Code: 1234
X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234

That way is completely backwards compatible (since we wouldn't be
changing response payloads) and we could handle i18n entirely via the
HTTP help service running on errors.openstack.org.

That could definitely be implemented in the short term, but if we're
talking about API WG long term evolution, I'm not sure why a standard
error payload body wouldn't be better.

Agreed. And using the “X-“ prefix in headers has been deprecated for over 2 
years now [1]. I don’t think we should be using it for new things.

Everett

[1] https://tools.ietf.org/html/rfc6648
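For what it's worth, the payload format quoted above would let clients dispatch on a machine-readable fault symbol rather than parse message strings. A quick sketch (the fault name is the example from this thread, not a shipped API):

```python
import json

# Dispatch on the "fault" field of the proposed standard error payload.
# Field names ("fault", "message") follow the thread's example; nothing
# here is implemented in any OpenStack service today.
def classify(body):
    err = json.loads(body)
    if err.get("fault") == "ComputeFeatureUnsupportedOnInstanceType":
        return "retry-with-different-instance-type"
    return "unrecoverable"

payload = json.dumps({
    "fault": "ComputeFeatureUnsupportedOnInstanceType",
    "message": "This compute feature is not supported on this instance type.",
})
print(classify(payload))  # retry-with-different-instance-type
```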



Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Qin Zhao
Agree with Sean. A short error name in the response body would be better for
applications that consume OpenStack. To my understanding, the
X-OpenStack-Error-Help-URI proposed by jpipes would be a URI pointing to an
error resolution method. Usually, a consumer application needn't load its
content.
On Feb 3, 2015 9:28 AM, "Sean Dague"  wrote:

> On 02/02/2015 05:35 PM, Jay Pipes wrote:
> > On 01/29/2015 12:41 PM, Sean Dague wrote:
> >> Correct. This actually came up at the Nova mid cycle in a side
> >> conversation with Ironic and Neutron folks.
> >>
> >> HTTP error codes are not sufficiently granular to describe what happens
> >> when a REST service goes wrong, especially if it goes wrong in a way
> >> that would let the client do something other than blindly try the same
> >> request, or fail.
> >>
> >> Having a standard json error payload would be really nice.
> >>
> >> {
> >>   "fault": "ComputeFeatureUnsupportedOnInstanceType",
> >>   "message": "This compute feature is not supported on this kind of
> >> instance type. If you need this feature please use a different instance
> >> type. See your cloud provider for options."
> >> }
> >>
> >> That would let us surface more specific errors.
> > 
> >>
> >> Standardization here from the API WG would be really great.
> >
> > What about having a separate HTTP header that indicates the "OpenStack
> > Error Code", along with a generated URI for finding more information
> > about the error?
> >
> > Something like:
> >
> > X-OpenStack-Error-Code: 1234
> > X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234
> >
> > That way is completely backwards compatible (since we wouldn't be
> > changing response payloads) and we could handle i18n entirely via the
> > HTTP help service running on errors.openstack.org.
>
> That could definitely be implemented in the short term, but if we're
> talking about API WG long term evolution, I'm not sure why a standard
> error payload body wouldn't be better.
>
> Then if we are going to have global codes that are just numbers, we'll
> also need a global naming registry. That isn't a bad thing; it just means
> someone will need to allocate the numbers in a separate global repo
> across all projects.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
>
>


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Brant Knudson
On Mon, Feb 2, 2015 at 4:35 PM, Jay Pipes  wrote:

> On 01/29/2015 12:41 PM, Sean Dague wrote:
>
>> Correct. This actually came up at the Nova mid cycle in a side
>> conversation with Ironic and Neutron folks.
>>
>> HTTP error codes are not sufficiently granular to describe what happens
>> when a REST service goes wrong, especially if it goes wrong in a way
>> that would let the client do something other than blindly try the same
>> request, or fail.
>>
>> Having a standard json error payload would be really nice.
>>
>> {
>>   "fault": "ComputeFeatureUnsupportedOnInstanceType",
>>   "message": "This compute feature is not supported on this kind of
>> instance type. If you need this feature please use a different instance
>> type. See your cloud provider for options."
>> }
>>
>> That would let us surface more specific errors.
>>
> 
>
>>
>> Standardization here from the API WG would be really great.
>>
>
> What about having a separate HTTP header that indicates the "OpenStack
> Error Code", along with a generated URI for finding more information about
> the error?
>
> Something like:
>
> X-OpenStack-Error-Code: 1234
> X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234
>
> That way is completely backwards compatible (since we wouldn't be changing
> response payloads) and we could handle i18n entirely via the HTTP help
> service running on errors.openstack.org.
>
>
Some of the suggested formats for an error document allow for multiple
errors, which would be useful in an input validation case since there may
be multiple fields that are incorrect (missing or wrong format).

One option to keep backwards compatibility is have both formats in the same
object. Keystone currently returns an error document like:

$ curl -X DELETE -H "X-auth-token: $TOKEN"
http://localhost:5000/v3/groups/lkdsajlkdsa/users/lkajfdskdsajf
{"error": {"message": "Could not find user: lkajfdskdsajf", "code": 404,
"title": "Not Found"}}

So an enhanced error document could have:

$ curl -X DELETE -H "X-auth-token: $TOKEN"
http://localhost:5000/v3/groups/lkdsajlkdsa/users/lkajfdskdsajf
{"error": {"message": "Could not find user: lkajfdskdsajf", "code": 404,
"title": "Not Found"},
 "errors": [ { "message": "Could not find group: lkdsajlkdsa", "id":
"groupNotFound" },
 { "message": "Could not find user: lkajfdskdsajf", "id":
"userNotFound" } ]
}

Then when identity API 4 comes out we drop the deprecated "error" field.
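For illustration, a client written against such a dual-format document could prefer the new "errors" list and fall back to the legacy "error" object — a sketch assuming exactly the field names shown above:

```python
import json

def extract_errors(body):
    """Return a list of (id, message) pairs from either error format.

    The "errors"/"error" field names follow the example documents in
    this thread; this is a sketch, not Keystone client code.
    """
    doc = json.loads(body)
    if "errors" in doc:  # enhanced, multi-error format
        return [(e.get("id"), e["message"]) for e in doc["errors"]]
    err = doc["error"]   # legacy single-error format
    return [(err.get("title"), err["message"])]

legacy = ('{"error": {"message": "Could not find user: x",'
          ' "code": 404, "title": "Not Found"}}')
print(extract_errors(legacy))  # [('Not Found', 'Could not find user: x')]
```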

- Brant



> Best,
> -jay
>
>
>


Re: [openstack-dev] [Neutron] Multiple template libraries being used in tree

2015-02-02 Thread Mike Bayer


Sean Dague  wrote:

> On 02/02/2015 06:06 PM, Mike Bayer wrote:
>> Sean Dague  wrote:
>> 
>>> On 02/02/2015 04:20 PM, Mark McClain wrote:
 You’re right that the Mako dependency is really a side effect from 
 Alembic.  We used jinja for templating radvd because it is used by the 
 projects within the OpenStack ecosystem and also used in VPNaaS.
>>> 
>>> Jinja is far more used in other parts of OpenStack from my recollection,
>>> I think that's probably the prefered thing to consolidate on.
>>> 
>>> Alembic being different is fine, it's a dependent library.
>> 
>> 
>> there’s no reason not to have both installed. Tempita also gets 
>> installed with a typical openstack setup.
>> 
>> that said, if you use Mako, you get the creator of Mako on board to help as 
>> he already works for openstack, for free!
> 
> Sure, but the point is that it would be better to have the OpenStack
> code be consistent in this regard, as it makes for a smoother
> environment.

stick with Jinja if that’s what projects are already using.


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Sean Dague
On 02/02/2015 05:35 PM, Jay Pipes wrote:
> On 01/29/2015 12:41 PM, Sean Dague wrote:
>> Correct. This actually came up at the Nova mid cycle in a side
>> conversation with Ironic and Neutron folks.
>>
>> HTTP error codes are not sufficiently granular to describe what happens
>> when a REST service goes wrong, especially if it goes wrong in a way
>> that would let the client do something other than blindly try the same
>> request, or fail.
>>
>> Having a standard json error payload would be really nice.
>>
>> {
>>   "fault": "ComputeFeatureUnsupportedOnInstanceType",
>>   "message": "This compute feature is not supported on this kind of
>> instance type. If you need this feature please use a different instance
>> type. See your cloud provider for options."
>> }
>>
>> That would let us surface more specific errors.
> 
>>
>> Standardization here from the API WG would be really great.
> 
> What about having a separate HTTP header that indicates the "OpenStack
> Error Code", along with a generated URI for finding more information
> about the error?
> 
> Something like:
> 
> X-OpenStack-Error-Code: 1234
> X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234
> 
> That way is completely backwards compatible (since we wouldn't be
> changing response payloads) and we could handle i18n entirely via the
> HTTP help service running on errors.openstack.org.

That could definitely be implemented in the short term, but if we're
talking about API WG long term evolution, I'm not sure why a standard
error payload body wouldn't be better.

Then if we are going to have global codes that are just numbers, we'll
also need a global naming registry. That isn't a bad thing; it just means
someone will need to allocate the numbers in a separate global repo
across all projects.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [Neutron] Multiple template libraries being used in tree

2015-02-02 Thread Sean Dague
On 02/02/2015 06:06 PM, Mike Bayer wrote:
> 
> 
> Sean Dague  wrote:
> 
>> On 02/02/2015 04:20 PM, Mark McClain wrote:
>>> You’re right that the Mako dependency is really a side effect from Alembic. 
>>>  We used jinja for templating radvd because it is used by the projects within 
>>> the OpenStack ecosystem and also used in VPNaaS.
>>
>> Jinja is far more used in other parts of OpenStack from my recollection,
>> I think that's probably the prefered thing to consolidate on.
>>
>> Alembic being different is fine, it's a dependent library.
> 
> 
> there’s no reason not to have both installed. Tempita also gets installed 
> with a typical openstack setup.
> 
> that said, if you use Mako, you get the creator of Mako on board to help as 
> he already works for openstack, for free!

Sure, but the point is that it would be better to have the OpenStack
code be consistent in this regard, as it makes for a smoother
environment.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [Neutron] unable to reproduce bug 1317363

2015-02-02 Thread bharath thiruveedula
Yeah sure

From: blak...@gmail.com
Date: Mon, 2 Feb 2015 11:09:08 -0800
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev][Neutron] unable to reproduce bug 1317363

The mailing list isn't a great place to discuss reproducing a bug. Post this 
comment on the bug report instead of the mailing list. That way the person who 
reported it and the ones who triaged it can see this information and respond. 
They might not be watching the dev mailing list as closely.


On Mon, Feb 2, 2015 at 10:17 AM, bharath thiruveedula  
wrote:



Hi,
I am Bharath Thiruveedula. I am new to OpenStack Neutron and networking. I am 
trying to solve bug 1317363, but I am unable to reproduce it. The steps I 
followed to reproduce:

1) Created a network with external = True
2) Created a subnet for the above network with CIDR = 172.24.4.0/24 and gateway-ip = 172.24.4.5
3) Created the router
4) Set the gateway interface on the router
5) Tried to change the subnet gateway-ip with "neutron subnet-update ff9fe828-9ca2-42c4-9997-3743d8fc0b0c --gateway-ip 172.24.4.7", but got this error: "Gateway ip 172.24.4.7 conflicts with allocation pool 172.24.4.6-172.24.4.254"
Can you please help me with this issue?
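The conflict in step 5 can be reproduced arithmetically — the requested gateway falls inside the subnet's allocation pool. A standalone sketch of that check (not Neutron's actual validation code):

```python
import ipaddress

# Does a proposed gateway IP fall inside the subnet's allocation pool?
# This mirrors the condition behind the quoted error message.
def gateway_in_pool(gateway, pool_start, pool_end):
    g = ipaddress.ip_address(gateway)
    return (ipaddress.ip_address(pool_start) <= g
            <= ipaddress.ip_address(pool_end))

# The values from the report above: 172.24.4.7 is inside
# 172.24.4.6-172.24.4.254, hence the conflict error.
print(gateway_in_pool("172.24.4.7", "172.24.4.6", "172.24.4.254"))  # True
```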

-- Bharath Thiruveedula   





-- 
Kevin Benton




Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Christopher Yeoh
On Tue, Feb 3, 2015 at 9:05 AM, Jay Pipes  wrote:

> On 01/29/2015 12:41 PM, Sean Dague wrote:
>
>> Correct. This actually came up at the Nova mid cycle in a side
>> conversation with Ironic and Neutron folks.
>>
>> HTTP error codes are not sufficiently granular to describe what happens
>> when a REST service goes wrong, especially if it goes wrong in a way
>> that would let the client do something other than blindly try the same
>> request, or fail.
>>
>> Having a standard json error payload would be really nice.
>>
>> {
>>   "fault": "ComputeFeatureUnsupportedOnInstanceType",
>>   "message": "This compute feature is not supported on this kind of
>> instance type. If you need this feature please use a different instance
>> type. See your cloud provider for options."
>> }
>>
>> That would let us surface more specific errors.
>>
> 
>
>>
>> Standardization here from the API WG would be really great.
>>
>
> What about having a separate HTTP header that indicates the "OpenStack
> Error Code", along with a generated URI for finding more information about
> the error?
>
> Something like:
>
> X-OpenStack-Error-Code: 1234
> X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234
>
> That way is completely backwards compatible (since we wouldn't be changing
> response payloads) and we could handle i18n entirely via the HTTP help
> service running on errors.openstack.org.
>
>
So I'm +1 to adding the X-OpenStack-Error-Code header, assuming the error
code is unique across OpenStack APIs and has a fixed meaning (we never
change it; we create a new one if a project needs an error code which is
close to the original one but a bit different).

The X-OpenStack-Error-Help-URI header I'm not sure about. We can't
guarantee that apps will have
access to errors.openstack.org - is there an assumption here that we'd
build/ship an error translation service?

Regards,

Chris


[openstack-dev] [heat] operators vs users for choosing convergence engine

2015-02-02 Thread Steve Baker
A spec has been raised to add a config option to allow operators to 
choose whether to use the new convergence engine for stack operations. 
For some context you should read the spec first [1]


Rather than doing this, I would like to propose the following:
* Users can (optionally) choose which engine to use by specifying an 
engine parameter on stack-create (choice of classic or convergence)
* Operators can set a config option which determines which engine to use 
if the user makes no explicit choice
* Heat developers will set the default config option from classic to 
convergence when convergence is deemed sufficiently mature
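Sketched as selection logic (the option and parameter names follow the proposal above; none of this exists in Heat yet):

```python
# Hypothetical engine selection implementing the proposal: a per-stack
# user parameter wins; otherwise the operator's config default applies.
# "classic" stays the default until convergence is deemed mature.
CONF_DEFAULT_ENGINE = "classic"  # operator-set config option

def select_engine(user_choice=None, conf_default=CONF_DEFAULT_ENGINE):
    if user_choice is not None:
        if user_choice not in ("classic", "convergence"):
            raise ValueError("engine must be 'classic' or 'convergence'")
        return user_choice
    return conf_default

print(select_engine())               # classic (operator default)
print(select_engine("convergence"))  # convergence (explicit user choice)
```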


I realize it is not ideal to expose this kind of internal implementation 
detail to the user, but choosing convergence _will_ result in different 
stack behaviour (such as multiple concurrent update operations) so there 
is an argument for giving the user the choice. Given enough supporting 
documentation they can choose whether convergence might be worth trying 
for a given stack (for example, a large stack which receives frequent 
updates)


Operators likely won't feel they have enough knowledge to make the call 
that a heat install should be switched to using all convergence, and 
users will never be able to try it until the operators do (or the 
default switches).


Finally, there are also some benefits to heat developers. Creating a 
whole new gate job to test convergence-enabled heat will consume its 
share of CI resource. I'm hoping to make it possible for some of our 
functional tests to run against a number of scenarios/environments. 
Being able to run tests under classic and convergence scenarios in one 
test run will be a great help (for performance profiling too).


If there is enough agreement then I'm fine with taking over and updating 
the convergence-config-option spec.


[1] 
https://review.openstack.org/#/c/152301/2/specs/kilo/convergence-config-option.rst




[openstack-dev] [openstack-operators] UpgradeImpact: Replacing swift_enable_snet with swift_store_endpoint

2015-02-02 Thread Jesse Cook
+ openstack-operators

On 2/2/15, 12:24 PM, "Jesse Cook" 
<jesse.c...@rackspace.com> wrote:

Configuration options will change (https://review.openstack.org/#/c/146972/4):

- Removed config option: "swift_enable_snet". The default value of
  "swift_enable_snet" was False [1]. The comments indicated not to change this
  default value unless you are Rackspace [2].

- Added config option "swift_store_endpoint". The default value of
  "swift_store_endpoint" is None, in which case the storage url from the auth
  response will be used. If set, the configured endpoint will be used. Example
  values: "swift_store_endpoint" = "https://www.example.com/v1/not_a_container";

1. 
https://github.com/openstack/glance/blob/fd5a55c7f386a9d9441d5f1291ff6a92f7e6cc1b/etc/glance-api.conf#L525
2. 
https://github.com/openstack/glance/blob/fd5a55c7f386a9d9441d5f1291ff6a92f7e6cc1b/etc/glance-api.conf#L520
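The documented fallback behaviour amounts to the following (a sketch of the described semantics, not the actual glance_store code):

```python
# Endpoint resolution per the new option's documented semantics:
# use swift_store_endpoint when set; otherwise fall back to the
# storage URL from the auth response. URLs below are examples only.
def resolve_swift_endpoint(swift_store_endpoint, auth_storage_url):
    if swift_store_endpoint:  # configured; default is None (unset)
        return swift_store_endpoint
    return auth_storage_url

# Unset -> auth response storage URL wins.
print(resolve_swift_endpoint(None, "https://swift.example.com/v1/AUTH_abc"))
# Set -> configured endpoint wins.
print(resolve_swift_endpoint("https://www.example.com/v1/not_a_container",
                             "https://swift.example.com/v1/AUTH_abc"))
```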

If you are using "swift_enable_snet" (i.e. You changed the default config from 
False to True in your deployment) and you are not Rackspace, please respond to 
this thread. Note, this is very unlikely as it is a Rackspace only option and 
documented as such.

Thanks,

Jesse


Re: [openstack-dev] TaskFlow 0.7.0 released

2015-02-02 Thread Joshua Harlow

Thanks for that!

Much appreciated :-)

Joe Gordon wrote:

This broke grenade on stable/juno, here is the fix.

https://review.openstack.org/#/c/152333/

On Mon, Feb 2, 2015 at 10:56 AM, Joshua Harlow <harlo...@outlook.com> wrote:

The Oslo team is pleased to announce the release of:

TaskFlow 0.7.0: taskflow structured state management library.

For more details, please see the git log history below and:

http://launchpad.net/taskflow/+milestone/0.7.0


Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/


Notable changes


* Using non-deprecated oslo.utils and oslo.serialization imports.
* Added note(s) about publicly consumable types into docs.
* Increase robustness of WBE producer/consumers by supporting and using
   the kombu provided feature to retry/ensure on transient/recoverable
   failures (such as timeouts).
* Move the jobboard/job bases to a jobboard/base module and
   move the persistence base to the parent directory (standardizes how
   all pluggable types now have a similar base module in a similar location,
   making the layout of taskflow's codebase easier to understand/follow).
* Add executor statistics: using taskflow.futures executors now provides a
   useful way to know about the following when using these executors.

   | Statistic | What it is                                                |
   |-----------|-----------------------------------------------------------|
   | failures  | How many submissions ended up raising exceptions          |
   | executed  | How many submissions were executed (failed or not)        |
   | runtime   | Total runtime of all submissions executed (failed or not) |
   | cancelled | How many submissions were cancelled before executing      |
* The taskflow logger module does not provide a logging adapter [bug]
* Use monotonic time when/if available for stopwatches (py3.3+ natively
   supports this) and other time.time usage (where the usage of
time.time only
   cares about the duration between two points in time).
* Make all/most usage of type errors follow a similar pattern (exception
   cleanup).

Changes in /homes/harlowja/dev/os/taskflow 0.6.1..0.7.0
-------------------------------------------------------

NOTE: Skipping requirement commits...

19f9674 Abstract out the worker finding from the WBE engine
99b92ae Add and use a nicer kombu message formatter
df6fb03 Remove duplicated 'do' in types documentation
43d70eb Use the class defined constant instead of raw strings
344b3f6 Use kombu socket.timeout alias instead of socket.timeout
d5128cf Stopwatch usage cleanup/tweak
2e43b67 Add note about publicly consumable types
e9226ca Add docstring to wbe proxy to denote not for public use
80888c6 Use monotonic time when/if available
7fe2945 Link WBE docs together better (especially around arguments)
f3a1dcb Emit a warning when no routing keys provided on publish()
802bce9 Center SVG state diagrams
97797ab Use importutils.try_import for optional eventlet imports
84d44fa Shrink the WBE request transition SVG image size
ca82e20 Add a thread bundle helper utility + tests
e417914 Make all/most usage of type errors follow a similar pattern
2f04395 Leave use-cases out of WBE developer documentation
e3e2950 Allow just specifying 'workers' for WBE entrypoint
66fc2df Add comments to runner state machine reaction functions
35745c9 Fix coverage environment
fc9cb88 Use explicit WBE worker object arguments (instead of kwargs)
0672467 WBE documentation tweaks/adjustments
55ad11f Add a WBE request state diagram + explanation
45ef595 Tidy up the WBE cache (now WBE types) module
1469552 Fix leftover/remaining 'oslo.utils' usage
93d73b8 Show the failure discarded (and the future intention)
5773fb0 Use a class provided logger before falling back to module
addc286 Use explicit WBE object arguments (instead of kwargs)
342c59e Fix persistence doc inheritance hierarchy
072210a The gathered runtime is for failures/not failures
410efa7 add clarification re parallel engine
cb27080 Increase robustness of WBE producer/consumers
bb38457 Move implementation(s) to there own sections
f14ee9e Move the jobboard/job bases to a jobboard/base module
ac5345e Have the serial task executor shutdown/restart its executor
426484f Mirror the task executor methods in the retry action
d92c226 Add back a 'eventlet_utils' helper utility module
1ed0f22 Use constants for runner state machine event names
bfc1136 Remove 'SaveOrderTask' and test state in class variables

[openstack-dev] UpgradeImpact: Replacing swift_enable_net with swift_store_endpoint

2015-02-02 Thread Jesse Cook
+openstack-operators

On 2/2/15, 12:24 PM, "Jesse Cook" <jesse.c...@rackspace.com> wrote:

Configuration options will change (https://review.openstack.org/#/c/146972/4):

- Removed config option: "swift_enable_snet". The default value of
  "swift_enable_snet" was False [1]. The comments indicated not to change this
  default value unless you are Rackspace [2].

- Added config option "swift_store_endpoint". The default value of
  "swift_store_endpoint" is None, in which case the storage url from the auth
  response will be used. If set, the configured endpoint will be used. Example
  values: "swift_store_endpoint" = "https://www.example.com/v1/not_a_container"
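For illustration, the new option might look like this in glance-api.conf (the endpoint value is a placeholder, and the section name depends on whether your deployment keeps the swift options in [DEFAULT] or has moved them to a glance_store section):

```
# Default: unset, so the storage URL from the auth response is used.
# When set, this endpoint is used instead:
swift_store_endpoint = https://www.example.com/v1/not_a_container
```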

1. 
https://github.com/openstack/glance/blob/fd5a55c7f386a9d9441d5f1291ff6a92f7e6cc1b/etc/glance-api.conf#L525
2. 
https://github.com/openstack/glance/blob/fd5a55c7f386a9d9441d5f1291ff6a92f7e6cc1b/etc/glance-api.conf#L520

If you are using "swift_enable_snet" (i.e. You changed the default config from 
False to True in your deployment) and you are not Rackspace, please respond to 
this thread. Note, this is very unlikely as it is a Rackspace only option and 
documented as such.

Thanks,

Jesse
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TaskFlow 0.7.0 released

2015-02-02 Thread Joe Gordon
This broke grenade on stable/juno, here is the fix.

https://review.openstack.org/#/c/152333/

On Mon, Feb 2, 2015 at 10:56 AM, Joshua Harlow  wrote:

> The Oslo team is pleased to announce the release of:
>
> TaskFlow 0.7.0: taskflow structured state management library.
>
> For more details, please see the git log history below and:
>
> http://launchpad.net/taskflow/+milestone/0.7.0
>
> Please report issues through launchpad:
>
> http://bugs.launchpad.net/taskflow/
>
> Noteable changes
> 
>
> * Using non-deprecated oslo.utils and oslo.serialization imports.
> * Added note(s) about publicly consumable types into docs.
> * Increase robustness of WBE producer/consumers by supporting and using
>   the kombu provided feature to retry/ensure on transient/recoverable
>   failures (such as timeouts).
> * Move the jobboard/job bases to a jobboard/base module and
>   move the persistence base to the parent directory (standardizes how
>   all pluggable types now have a similiar base module in a similar
> location,
>   making the layout of taskflow's codebase easier to understand/follow).
> * Add executor statistics, using taskflow.futures executors now provides a
>   useful feature to know about the following when using these executors.
>   --
>   | Statistic | What it is |
>   
> -
>   | failures  | How many submissions ended up raising exceptions  |
>   | executed  | How many submissions were executed (failed or not)|
>   | runtime   | Total runtime of all submissions executed (failed or not) |
>   | cancelled | How many submissions were cancelled before executing  |
>   
> -
> * The taskflow logger module does not provide a logging adapter [bug]
> * Use monotonic time when/if available for stopwatches (py3.3+ natively
>   supports this) and other time.time usage (where the usage of time.time
> only
>   cares about the duration between two points in time).
> * Make all/most usage of type errors follow a similar pattern (exception
>   cleanup).
>
> Changes in /homes/harlowja/dev/os/taskflow 0.6.1..0.7.0
> ---
>
> NOTE: Skipping requirement commits...
>
> 19f9674 Abstract out the worker finding from the WBE engine
> 99b92ae Add and use a nicer kombu message formatter
> df6fb03 Remove duplicated 'do' in types documentation
> 43d70eb Use the class defined constant instead of raw strings
> 344b3f6 Use kombu socket.timeout alias instead of socket.timeout
> d5128cf Stopwatch usage cleanup/tweak
> 2e43b67 Add note about publicly consumable types
> e9226ca Add docstring to wbe proxy to denote not for public use
> 80888c6 Use monotonic time when/if available
> 7fe2945 Link WBE docs together better (especially around arguments)
> f3a1dcb Emit a warning when no routing keys provided on publish()
> 802bce9 Center SVG state diagrams
> 97797ab Use importutils.try_import for optional eventlet imports
> 84d44fa Shrink the WBE request transition SVG image size
> ca82e20 Add a thread bundle helper utility + tests
> e417914 Make all/most usage of type errors follow a similar pattern
> 2f04395 Leave use-cases out of WBE developer documentation
> e3e2950 Allow just specifying 'workers' for WBE entrypoint
> 66fc2df Add comments to runner state machine reaction functions
> 35745c9 Fix coverage environment
> fc9cb88 Use explicit WBE worker object arguments (instead of kwargs)
> 0672467 WBE documentation tweaks/adjustments
> 55ad11f Add a WBE request state diagram + explanation
> 45ef595 Tidy up the WBE cache (now WBE types) module
> 1469552 Fix leftover/remaining 'oslo.utils' usage
> 93d73b8 Show the failure discarded (and the future intention)
> 5773fb0 Use a class provided logger before falling back to module
> addc286 Use explicit WBE object arguments (instead of kwargs)
> 342c59e Fix persistence doc inheritance hierarchy
> 072210a The gathered runtime is for failures/not failures
> 410efa7 add clarification re parallel engine
> cb27080 Increase robustness of WBE producer/consumers
> bb38457 Move implementation(s) to there own sections
> f14ee9e Move the jobboard/job bases to a jobboard/base module
> ac5345e Have the serial task executor shutdown/restart its executor
> 426484f Mirror the task executor methods in the retry action
> d92c226 Add back a 'eventlet_utils' helper utility module
> 1ed0f22 Use constants for runner state machine event names
> bfc1136 Remove 'SaveOrderTask' and test state in class variables
> 22eef96 Provide the stopwatch elapsed method a maximum
> 3968508 Fix unused and conflicting variables
> 2280f9a Switch to using 'oslo_serialization' vs 'oslo.serialization'
> d748db9 Switch to using 'oslo_utils' vs 'oslo.utils'
> 9c15eff Add executor statistics
> bf2f205 Use oslo.utils reflection for class name
> 9fe99ba Add split time capturing to the stop watch
> 42a665d 

Re: [openstack-dev] [oslo.db][nova] Use of asynchronous slaves in Nova (was: Deprecating use_slave in Nova)

2015-02-02 Thread Mike Bayer


Matthew Booth  wrote:

> 
> Based on my current (and still sketchy) understanding, I think we can
> define 3 classes of database node:
> 
> 1. Read/write
> 2. Synchronous read-only
> 3. Asynchronous read-only
> 
> and 3 code annotations:
> 
> * Writer (must use class 1)
> * Reader (prefer class 2, can use 1)
> * Async reader (prefer class 3, can use 2 or 1)
> 
> The use cases for async would presumably be limited. Perhaps certain
> periodic tasks? Would it even be worth it?

Let’s suppose someone runs an openstack setup using a database with async 
replication.

Can openstack even make use of this outside of these periodic tasks, or is it 
the case that a stateless call to openstack (e.g. a web service call) can’t be 
tasked with knowing when it relies upon a previous web service call that may 
not have been synced?

Let’s suppose that an app has a web service call, and within that scope, it 
calls a function that does @writer, and then it calls a function that does 
@reader.   Even that situation, enginefacade could detect that within the new 
@reader call, we see a context being passed that we know was just used in a 
@writer - so even then, we could have the @reader upgrade to @writer if we know 
that reader slaves are async in a certain configuration.

But is that enough?   Or is it the case that a common operation calls upon 
multiple web service calls that are dependent on each other, with no indication 
between them to detect this, therefore all of these calls have to assume “I can 
only read from a slave if its synchronous” ?

I think we really need to know what deployment styles we are targeting here.  
If most people use galera synchronous, that can be the primary platform, and 
the others simply won’t be able to promise very good utilization of async read 
slaves.

If that all makes sense.  If I read this a week from now I won’t understand 
what I’m talking about.
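The context-tracking idea sketched above (a @reader that upgrades to the writer engine when the same context was just used by a @writer, so an async slave is never asked for data it may not have replicated yet) could look roughly like this; all names here are illustrative, not the real oslo.db enginefacade API:

```python
import functools

class Context(object):
    """Toy request context that remembers whether it has written."""
    def __init__(self):
        self.wrote = False

def writer(fn):
    @functools.wraps(fn)
    def wrapper(context, *args, **kwargs):
        context.wrote = True  # remember this context mutated state
        return fn(context, *args, **kwargs)  # would run on the master engine
    return wrapper

def reader(fn, slaves_are_async=True):
    @functools.wraps(fn)
    def wrapper(context, *args, **kwargs):
        # Upgrade to the master if an async slave might lag behind a
        # write that this same context just performed.
        use_master = slaves_are_async and context.wrote
        return fn(context, *args, **kwargs), use_master
    return wrapper

@writer
def create_record(context):
    return "created"

@reader
def get_record(context):
    return "row"

ctx = Context()
print(get_record(ctx))   # ('row', False) - slave is safe, nothing written yet
create_record(ctx)
print(get_record(ctx))   # ('row', True) - upgraded to master after the write
```

This only covers the single-context case; as the message notes, dependent web service calls with no shared context cannot be detected this way.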





Re: [openstack-dev] [Neutron] Multiple template libraries being used in tree

2015-02-02 Thread Mike Bayer


Sean Dague  wrote:

> On 02/02/2015 04:20 PM, Mark McClain wrote:
>> You’re right that the Mako dependency is really a side effect from Alembic.  
>> We used jinja for templating radvd because it is used by the projects within 
>> the OpenStack ecosystem and also used in VPNaaS.
> 
> Jinja is far more used in other parts of OpenStack from my recollection,
> I think that's probably the prefered thing to consolidate on.
> 
> Alembic being different is fine, it's a dependent library.


there’s no reason not to have both installed. Tempita also gets installed 
with a typical openstack setup.

that said, if you use Mako, you get the creator of Mako on board to help as he 
already works for openstack, for free!




> 
>   -Sean
> 
>> mark
>> 
>> 
>>> On Feb 2, 2015, at 3:13 PM, Sean M. Collins  wrote:
>>> 
>>> Sorry, I should have done a bit more grepping before I sent the e-mail,
>>> since it appears that Mako is being used by alembic.
>>> 
>>> http://alembic.readthedocs.org/en/latest/tutorial.html
>>> 
>>> So, should we switch the radvd templating over to Mako instead?
>>> 
>>> -- 
>>> Sean M. Collins
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Heat] Convergence Phase 1 implementation plan

2015-02-02 Thread Zane Bitter

On 26/01/15 19:04, Angus Salkeld wrote:

On Sat, Jan 24, 2015 at 7:00 AM, Zane Bitter <zbit...@redhat.com> wrote:
I'm also prepared to propose specs for all of these _if_ people
think that would be helpful. I see three options here:
  - Propose 18 fairly minimal specs (maybe in a single review?)


This sounds fine, but if possible group them a bit; 18 sounds like a lot
and many of these look like small jobs.
I am also open to using bugs for smaller items. Basically this is just
the red tape, so whatever is the least effort
and makes things easier to divide the work up.


OK, here are the specs:

https://review.openstack.org/#/q/status:open+project:openstack/heat-specs+branch:master+topic:convergence,n,z

Let's get reviewing (and implementing!) :)

cheers,
Zane.




Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Kurt Taylor
On Mon, Feb 2, 2015 at 4:07 PM, Matt Riedemann 
wrote:

>
>
> On 2/2/2015 3:52 PM, Kurt Taylor wrote:
>
>> Thanks Morgan, That's why I wanted to email. We will gladly come to a
>> meeting and formally request to comment and will turn off commenting on
>> Keystone until then.
>>
>> Thanks,
>> Kurt Taylor (krtaylor)
>>
>> On Mon, Feb 2, 2015 at 3:43 PM, Morgan Fainberg
>> <morgan.fainb...@gmail.com> wrote:
>>
>> I assumed [my mistake] this was not commenting/reporting, just
>> running against Keystone. I expect a more specific request to
>> comment rather than a “hey we’re doing this” if commenting is what
>> is desired.
>>
>> Please come to our weekly meeting if you’re planning on
>> commenting/scoring on keystone patches.
>>
>> --
>> Morgan Fainberg
>>
>> On February 2, 2015 at 1:41:08 PM, Anita Kuno (ante...@anteaya.info) wrote:
>>
>>  On 02/02/2015 02:16 PM, Morgan Fainberg wrote:
>>> > Thank you for the heads up.
>>> >
>>> > —Morgan
>>> >
>>> > --
>>> > Morgan Fainberg
>>> >
>>> On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com) wrote:
>>> >
>>> > Just FYI, in case there was any questions,
>>> >
>>> > In addition to testing and reporting on Nova, the IBM PowerKVM CI
>>> system is now also testing against Keystone patches.
>>> >
>>> > We are happy to also be testing keystone patches on PowerKVM, and
>>> will be adding other projects soon.
>>> >
>>> > Regards,
>>> > Kurt Taylor (krtaylor)
>>>
>>>
> Sorry for being naive, but what in Keystone is arch-specific such that it
> could be different on ppc64 vs x86_64?  Or is there more to PowerKVM CI
> than the name implies?
>
>
No, it's a good question. We plan on testing many different repos or
components in L1 and beyond. It is a quality statement really, to assure
anyone wanting to run OpenStack on a different platform that some set of
tests against some set of core components had been run.

We were starting with the L1 components with Nova first (as you know) and
adding from there. However, I of all people should know better than to turn
on comments for this new component without discussing it at the component's
meeting. I'm on the agenda for Keystone, please feel free to attend and
discuss.  https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting

Thanks,
Kurt Taylor (krtaylor)

-- 
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [nova][api] How to handle API changes in contrib/*.py

2015-02-02 Thread Claudiu Belu
Hello!

There has been some discussion about what nova-api should return after a change 
in the API itself.

The change that generated this discussion is an API change in microversion 2.2:
https://review.openstack.org/#/c/140313/23

- **2.2**

  Added Keypair type.

  A user can request the creation of a certain 'type' of keypair (ssh or x509).

  If no keypair type is specified, then the default 'ssh' type of keypair is
  created.

Currently, this change was done in plugins/v3/keypairs.py, so the 2.2 
version will also return the keypair type on keypair-list, keypair-show, and 
keypair-create.

Microversioning was used, so this behaviour is valid only if the user requests 
the 2.2 version. Version 2.1 will not accept keypair type as argument, nor will 
return the keypair type.
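The version-gated behaviour described above can be sketched as follows (names and the dispatch mechanism are invented for illustration; the real nova code uses api_version decorators rather than an explicit check):

```python
# Toy sketch of a microversion-gated response: the 'type' field only
# appears for clients that negotiated API version 2.2 or later.
def show_keypair(keypair, requested_version):
    body = {"name": keypair["name"], "public_key": keypair["public_key"]}
    if requested_version >= (2, 2):
        # 2.2+ callers also get the keypair type ('ssh' or 'x509')
        body["type"] = keypair.get("type", "ssh")
    return body

kp = {"name": "kp1", "public_key": "ssh-rsa AAAA...", "type": "x509"}
print(show_keypair(kp, (2, 1)))  # no 'type' key in the response
print(show_keypair(kp, (2, 2)))  # includes 'type': 'x509'
```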

Now, the main problem is contrib/keypairs.py, where microversioning cannot be 
applied. The current commit filters out the keypair type, so it won't be 
returned. But there have been reviews stating that returning the keypair type is 
a "back-compatible change". Before this, there was a review stating that the 
keypair type should not be returned.

So, finally, my question is: how should the API change be handled in 
contrib/keypairs.py?

Best regards,

Claudiu Belu


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Ryan Moats
Sigh... hit send too soon and forgot to sign...

+1 to that idea...

Ryan

Jay Pipes  wrote on 02/02/2015 04:35:36 PM:

>
> What about having a separate HTTP header that indicates the "OpenStack
> Error Code", along with a generated URI for finding more information
> about the error?
>
> Something like:
>
> X-OpenStack-Error-Code: 1234
> X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234
>
> That way is completely backwards compatible (since we wouldn't be
> changing response payloads) and we could handle i18n entirely via the
> HTTP help service running on errors.openstack.org.


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Ryan Moats
+1 to that idea...

Jay Pipes  wrote on 02/02/2015 04:35:36 PM:

>
> What about having a separate HTTP header that indicates the "OpenStack
> Error Code", along with a generated URI for finding more information
> about the error?
>
> Something like:
>
> X-OpenStack-Error-Code: 1234
> X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234
>
> That way is completely backwards compatible (since we wouldn't be
> changing response payloads) and we could handle i18n entirely via the
> HTTP help service running on errors.openstack.org.


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Jay Pipes

On 01/29/2015 12:41 PM, Sean Dague wrote:

Correct. This actually came up at the Nova mid cycle in a side
conversation with Ironic and Neutron folks.

HTTP error codes are not sufficiently granular to describe what happens
when a REST service goes wrong, especially if it goes wrong in a way
that would let the client do something other than blindly try the same
request, or fail.

Having a standard json error payload would be really nice.

{
  fault: ComputeFeatureUnsupportedOnInstanceType,
  message: "This compute feature is not supported on this kind of
instance type. If you need this feature please use a different instance
type. See your cloud provider for options."
}

That would let us surface more specific errors.




Standardization here from the API WG would be really great.


What about having a separate HTTP header that indicates the "OpenStack 
Error Code", along with a generated URI for finding more information 
about the error?


Something like:

X-OpenStack-Error-Code: 1234
X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234

That way is completely backwards compatible (since we wouldn't be 
changing response payloads) and we could handle i18n entirely via the 
HTTP help service running on errors.openstack.org.
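The proposal above amounts to something like this; the header names and the errors.openstack.org URI are part of the proposal rather than an existing API, and the helper function is invented for illustration:

```python
# Sketch of attaching an OpenStack error code and a help URI as HTTP
# response headers, leaving the response body untouched.
ERROR_HELP_ROOT = "http://errors.openstack.org"

def add_error_headers(headers, error_code):
    """Annotate a response headers dict with the proposed error headers."""
    headers["X-OpenStack-Error-Code"] = str(error_code)
    headers["X-OpenStack-Error-Help-URI"] = "%s/%d" % (ERROR_HELP_ROOT,
                                                       error_code)
    return headers

resp_headers = add_error_headers({"Content-Type": "application/json"}, 1234)
print(resp_headers["X-OpenStack-Error-Help-URI"])
# http://errors.openstack.org/1234
```

Since only headers are added, existing clients that ignore unknown headers keep working unchanged, which is the backwards-compatibility point being made.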


Best,
-jay



Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-02-02 Thread Zane Bitter

On 30/01/15 02:19, Thomas Spatzier wrote:

From: Zane Bitter
To: openstack Development Mailing List
Date: 29/01/2015 17:47
Subject: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat


I got a question today about creating keystone users/roles/tenants in
Heat templates. We currently support creating users via the
AWS::IAM::User resource, but we don't have a native equivalent.

IIUC keystone now allows you to add users to a domain that is otherwise
backed by a read-only backend (i.e. LDAP). If this means that it's now
possible to configure a cloud so that one need not be an admin to create
users then I think it would be a really useful thing to expose in Heat.
Does anyone know if that's the case?

I think roles and tenants are likely to remain admin-only, but we have
precedent for including resources like that in /contrib... this seems
like it would be comparably useful.

Thoughts?


I am really not a keystone expert, so don't know what the security
implications would be, but I have heard the requirement or wish to be able
to create users, roles etc. from a template many times. I've talked to
people who want to explore this for onboarding use cases, e.g. for
onboarding of lines of business in a company, or for onboarding customers
in a public cloud case. They would like to be able to have templates that
lay out the overall structure for authentication stuff, and then
parameterize it for each onboarding process.
If this is something to be enabled, that would be interesting to explore.


Thanks for the input everyone. I raised a spec + blueprint here:

https://review.openstack.org/152309

I don't have any immediate plans to work on this, so if anybody wants to 
grab it they'd be more than welcome :)


cheers,
Zane.



Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Matt Riedemann



On 2/2/2015 3:52 PM, Kurt Taylor wrote:

Thanks Morgan, That's why I wanted to email. We will gladly come to a
meeting and formally request to comment and will turn off commenting on
Keystone until then.

Thanks,
Kurt Taylor (krtaylor)

On Mon, Feb 2, 2015 at 3:43 PM, Morgan Fainberg
<morgan.fainb...@gmail.com> wrote:

I assumed [my mistake] this was not commenting/reporting, just
running against Keystone. I expect a more specific request to
comment rather than a “hey we’re doing this” if commenting is what
is desired.

Please come to our weekly meeting if you’re planning on
commenting/scoring on keystone patches.

--
Morgan Fainberg

On February 2, 2015 at 1:41:08 PM, Anita Kuno (ante...@anteaya.info) wrote:


On 02/02/2015 02:16 PM, Morgan Fainberg wrote:
> Thank you for the heads up.
>
> —Morgan
>
> --
> Morgan Fainberg
>
> On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com) wrote:
>
> Just FYI, in case there was any questions,
>
> In addition to testing and reporting on Nova, the IBM PowerKVM CI system 
is now also testing against Keystone patches.
>
> We are happy to also be testing keystone patches on PowerKVM, and will be 
adding other projects soon.
>
> Regards,
> Kurt Taylor (krtaylor)
>
Requesting permission to comment on a new repo is best done at the
weekly meeting of the project in question, not the mailing list.

Thanks,
Anita.




Sorry for being naive, but what in Keystone is arch-specific such that 
it could be different on ppc64 vs x86_64?  Or is there more to PowerKVM 
CI than the name implies?


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Anita Kuno
On 02/02/2015 02:52 PM, Kurt Taylor wrote:
> Thanks Morgan, That's why I wanted to email.
And since we have over 100 third party CI accounts this is why this sort
of conversation can take place in channel rather than the mail list.

Everyone can attend meetings: https://wiki.openstack.org/wiki/Meetings
Permission is not required. Show up at the specified time and day in the
irc channel and introduce yourself.

Thank you,
Anita.

> We will gladly come to a
> meeting and formally request to comment and will turn off commenting on
> Keystone until then.
> 
> Thanks,
> Kurt Taylor (krtaylor)
> 
> On Mon, Feb 2, 2015 at 3:43 PM, Morgan Fainberg 
> wrote:
> 
>> I assumed [my mistake] this was not commenting/reporting, just running
>> against Keystone. I expect a more specific request to comment rather than a
>> “hey we’re doing this” if commenting is what is desired.
>>
>> Please come to our weekly meeting if you’re planning on commenting/scoring
>> on keystone patches.
>>
>> --
>> Morgan Fainberg
>>
>> On February 2, 2015 at 1:41:08 PM, Anita Kuno (ante...@anteaya.info)
>> wrote:
>>
>> On 02/02/2015 02:16 PM, Morgan Fainberg wrote:
>>> Thank you for the heads up.
>>>
>>> —Morgan
>>>
>>> --
>>> Morgan Fainberg
>>>
>>> On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com)
>> wrote:
>>>
>>> Just FYI, in case there was any questions,
>>>
>>> In addition to testing and reporting on Nova, the IBM PowerKVM CI system
>> is now also testing against Keystone patches.
>>>
>>> We are happy to also be testing keystone patches on PowerKVM, and will
>> be adding other projects soon.
>>>
>>> Regards,
>>> Kurt Taylor (krtaylor)
>>>
>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> Requesting permission to comment on a new repo is best done at the
>> weekly meeting of the project in question, not the mailing list.
>>
>> Thanks,
>> Anita.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> 




Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-02 Thread Jesse Pretorius
On 2 February 2015 at 16:29, Sean Dague  wrote:

> It's really easy to say "someone should do this", but the problem is
> that none of the core team is interested, neither is anyone else. Most
> of the people that once were interested have left being active in
> OpenStack.
>
> EC2 compatibility does not appear to be part of the long term strategy
> for the project, hasn't been in a while (looking at the level of
> maintenance here). Ok, we should signal that so that new and existing
> users that believe that is a core supported feature realize it's not.
>
> The fact that there is some plan to exist out of tree is a bonus,
> however the fact that this is not a first class feature in Nova really
> does need to be signaled. It hasn't been.
>
> Maybe deprecation is the wrong tool for that, and marking EC2 as
> experimental and non supported in the log message is more appropriate.
>

I think that perhaps something that shouldn't be lost sight of is that the
users using the EC2 API are using it as-is. The only commitment that needs
to be made is to maintain the functionality that's already there, rather
than attempt to keep it up to scratch with newer functionality that's come
into EC2.

The stackforge project can perhaps be the incubator for the development of
a full replacement which is more up-to-date and interacts more like a
translator. Once it's matured enough that the users want to use it instead
of the old EC2 API in-tree, then perhaps deprecation is the right option.

Between now and then, I must say that I agree with Sean - perhaps the best
strategy would be to make it clear somehow that the EC2 API isn't a fully
tested or up-to-date API.


Re: [openstack-dev] [oslo] oslo.versionedobjects repository is ready for pre-import review

2015-02-02 Thread Doug Hellmann


On Mon, Feb 2, 2015, at 04:33 PM, Doug Hellmann wrote:
> I’ve prepared a copy of nova.objects as oslo_versionedobjects in
> https://github.com/dhellmann/oslo.versionedobjects-import. The script to
> create the repository is part of the update to the spec in
> https://review.openstack.org/15.
> 
> Please look over the code so you are familiar with it. Dan and I have
> already talked about the need to rewrite the tests that depend on nova’s
> service code, so those are set to skip for now. We’ll need to do some
> work to make the lib compatible with python 3, so I’ll make sure the
> project-config patch does not enable those tests, yet.
> 
> Please post comments on the code here on the list in case I end up
> needing to rebuild that import repository.
> 
> I’ll give everyone a few days before removing the WIP flag from the infra
> change to import this new repository
> (https://review.openstack.org/151792).

I filed bugs for a few known issues that we'll need to work on before
the first release: https://bugs.launchpad.net/oslo.versionedobjects

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Kurt Taylor
Thanks Morgan. That's why I wanted to email. We will gladly come to a
meeting to formally request permission to comment, and will turn off
commenting on Keystone until then.

Thanks,
Kurt Taylor (krtaylor)

On Mon, Feb 2, 2015 at 3:43 PM, Morgan Fainberg 
wrote:

> I assumed [my mistake] this was not commenting/reporting, just running
> against Keystone. I expect a more specific request to comment rather than a
> “hey we’re doing this” if commenting is what is desired.
>
> Please come to our weekly meeting if you’re planning on commenting/scoring
> on keystone patches.
>
> --
> Morgan Fainberg
>
> On February 2, 2015 at 1:41:08 PM, Anita Kuno (ante...@anteaya.info)
> wrote:
>
> On 02/02/2015 02:16 PM, Morgan Fainberg wrote:
> > Thank you for the heads up.
> >
> > —Morgan
> >
> > --
> > Morgan Fainberg
> >
> > On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com)
> wrote:
> >
> > Just FYI, in case there was any questions,
> >
> > In addition to testing and reporting on Nova, the IBM PowerKVM CI system
> is now also testing against Keystone patches.
> >
> > We are happy to also be testing keystone patches on PowerKVM, and will
> be adding other projects soon.
> >
> > Regards,
> > Kurt Taylor (krtaylor)
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> Requesting permission to comment on a new repo is best done at the
> weekly meeting of the project in question, not the mailing list.
>
> Thanks,
> Anita.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Multiple template libraries being used in tree

2015-02-02 Thread Sean Dague
On 02/02/2015 04:20 PM, Mark McClain wrote:
> You’re right that the Mako dependency is really a side effect from Alembic.  
> We used jinja for templating radvd because it is used by projects within 
> the OpenStack ecosystem and also used in VPNaaS.

Jinja is far more used in other parts of OpenStack from my recollection;
I think that's probably the preferred thing to consolidate on.

Alembic being different is fine, it's a dependent library.

-Sean

> mark
> 
> 
>> On Feb 2, 2015, at 3:13 PM, Sean M. Collins  wrote:
>>
>> Sorry, I should have done a bit more grepping before I sent the e-mail,
>> since it appears that Mako is being used by alembic.
>>
>> http://alembic.readthedocs.org/en/latest/tutorial.html
>>
>> So, should we switch the radvd templating over to Mako instead?
>>
>> -- 
>> Sean M. Collins
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Morgan Fainberg
I assumed [my mistake] this was not commenting/reporting, just running against 
Keystone. I expect a more specific request to comment rather than a “hey we’re 
doing this” if commenting is what is desired.

Please come to our weekly meeting if you’re planning on commenting/scoring on 
keystone patches.

-- 
Morgan Fainberg

On February 2, 2015 at 1:41:08 PM, Anita Kuno (ante...@anteaya.info) wrote:

On 02/02/2015 02:16 PM, Morgan Fainberg wrote:  
> Thank you for the heads up.  
>  
> —Morgan  
>  
> --  
> Morgan Fainberg  
>  
> On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com) 
> wrote:  
>  
> Just FYI, in case there was any questions,  
>  
> In addition to testing and reporting on Nova, the IBM PowerKVM CI system is 
> now also testing against Keystone patches.  
>  
> We are happy to also be testing keystone patches on PowerKVM, and will be 
> adding other projects soon.  
>  
> Regards,  
> Kurt Taylor (krtaylor)  
> __  
> OpenStack Development Mailing List (not for usage questions)  
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
>  
>  
>  
> __  
> OpenStack Development Mailing List (not for usage questions)  
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
>  
Requesting permission to comment on a new repo is best done at the  
weekly meeting of the project in question, not the mailing list.  

Thanks,  
Anita.  

__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Morgan Fainberg
On February 2, 2015 at 1:31:14 PM, Joe Gordon (joe.gord...@gmail.com) wrote:


On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg  
wrote:
I think the simple answer is "yes". We (keystone) should emit notifications. 
And yes other projects should listen.

The only thing really in discussion should be:

1: soft delete or hard delete? Does the service mark it as orphaned, or just 
delete (leave this to nova, cinder, etc to discuss)

2: how to cleanup when an event is missed (e.g rabbit bus goes out to lunch).
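
Point 2 above — recovering when a deletion event is missed — usually implies
a periodic reconciliation sweep in each service. A minimal sketch under that
assumption; the function and field names here are illustrative stand-ins, not
real OpenStack APIs:

```python
# Hypothetical reconciliation sweep: even if a keystone
# "identity.project.deleted" notification is missed, a periodic pass can
# still find resources whose project no longer exists in keystone.
# find_orphans and the sample inventories below are illustrative only.

def find_orphans(keystone_project_ids, resources):
    """Return resources whose project_id is unknown to keystone."""
    known = set(keystone_project_ids)
    return [r for r in resources if r["project_id"] not in known]

# Example data standing in for a real project list and instance inventory.
projects = ["p1", "p2"]
instances = [
    {"id": "vm-1", "project_id": "p1"},
    {"id": "vm-2", "project_id": "deleted-tenant"},
]

for r in find_orphans(projects, instances):
    # A real service would log a warning, shut down, archive, or reap here.
    print("orphaned resource:", r["id"])
```

Whether the sweep then soft-deletes (marks orphaned) or hard-deletes is
exactly point 1, and stays a per-service decision.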


I disagree slightly: I don't think projects should directly listen to the 
Keystone notifications; I would rather have the API be something from a 
keystone-owned library, say keystonemiddleware. So something like this:

from keystonemiddleware import janitor

keystone_janitor = janitor.Janitor()
keystone_janitor.register_callback(nova.tenant_cleanup)

keystone_janitor.spawn_greenthread()

That way each project doesn't have to include a lot of boilerplate code, and 
keystone can easily modify/improve/upgrade the notification mechanism.


Sure. I’d treat that as an implementation detail of where it actually 
lives. I’d be fine with that being part of the Keystone Middleware package 
(probably something separate from auth_token).

—Morgan

 

--Morgan

Sent via mobile

> On Feb 2, 2015, at 10:16, Matthew Treinish  wrote:
>
>> On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
>> This came up in the operators mailing list back in June [1] but given the
>> subject probably didn't get much attention.
>>
>> Basically there is a really old bug [2] from Grizzly that is still a problem
>> and affects multiple projects.  A tenant can be deleted in Keystone even
>> though other resources in other projects are under that project, and those
>> resources aren't cleaned up.
>
> I agree this probably can be a major pain point for users. We've had to work 
> around it
> in tempest by creating things like:
>
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
> and
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py
>
> to ensure we aren't dangling resources after a run. But, this doesn't work in
> all cases either. (like with tenant isolation enabled)
>
> I also know there is a stackforge project that is attempting something similar
> here:
>
> http://git.openstack.org/cgit/stackforge/ospurge/
>
> It would be much nicer if the burden for doing this was taken off users and 
> this
> was just handled cleanly under the covers.
>
>>
>> Keystone implemented event notifications back in Havana [3] but the other
>> projects aren't listening on them to know when a project has been deleted
>> and act accordingly.
>>
>> The bug has several people saying "we should talk about this at the summit"
>> for several summits, but I can't find any discussion or summit sessions
>> related back to the bug.
>>
>> Given this is an operations and cross-project issue, I'd like to bring it up
>> again for the Vancouver summit if there is still interest (which I'm
>> assuming there is from operators).
>
> I'd definitely support having a cross-project session on this.
>
>>
>> There is a blueprint specifically for the tenant deletion case but it's
>> targeted at only Horizon [4].
>>
>> Is anyone still working on this? Is there sufficient interest in a
>> cross-project session at the L summit?
>>
>> Thinking out loud, even if nova doesn't listen to events from keystone, we
>> could at least have a periodic task that looks for instances where the
>> tenant no longer exists in keystone and then take some action (log a
>> warning, shutdown/archive/, reap, etc).
>>
>> There is also a spec for L to transfer instance ownership [5] which could
>> maybe come into play, but I wouldn't depend on it.
>>
>> [1] 
>> http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html
>> [2] https://bugs.launchpad.net/nova/+bug/967832
>> [3] https://blueprints.launchpad.net/keystone/+spec/notifications
>> [4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
>> [5] https://review.openstack.org/#/c/105367/
>
> -Matt Treinish
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Anita Kuno
On 02/02/2015 02:16 PM, Morgan Fainberg wrote:
> Thank you for the heads up. 
> 
> —Morgan
> 
> -- 
> Morgan Fainberg
> 
> On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com) 
> wrote:
> 
> Just FYI, in case there was any questions,
> 
> In addition to testing and reporting on Nova, the IBM PowerKVM CI system is 
> now also testing against Keystone patches.
> 
> We are happy to also be testing keystone patches on PowerKVM, and will be 
> adding other projects soon.
> 
> Regards,
> Kurt Taylor (krtaylor)
> __  
> OpenStack Development Mailing List (not for usage questions)  
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Requesting permission to comment on a new repo is best done at the
weekly meeting of the project in question, not the mailing list.

Thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

2015-02-02 Thread Rochelle Grober
What I see in this conversation is that we are talking about multiple different 
user classes.

Infra-operator needs as much info as possible, so if it is a vendor driver that 
is erring out, the dev-ops can see it in the log.

Tenant-operator is a totally different class of user.  These guys need VM based 
logs and virtual network based logs, etc., but should never see as far under 
the covers as the infra-ops *has* to see.

So, it sounds like a security policy issue: what makes it to tenant logs and 
what stays "in the data center".

There are *lots* of logs that are being generated.  It sounds like we need 
standards on what goes into which logs along with error codes, 
logging/reporting levels, criticality, etc.

--Rocky

(bcc'ing the ops list so they can join this discussion, here)

-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: Monday, February 02, 2015 8:19 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

On 02/01/2015 06:20 PM, Morgan Fainberg wrote:
> Putting on my "sorry-but-it-is-my-job-to-get-in-your-way" hat (aka security), 
> let's be careful how generous we are with the user and data we hand back. It 
> should give enough information to be useful but no more. I don't want to see 
> us opened to weird attack vectors because we're exposing internal state too 
> generously. 
> 
> In short let's aim for a slow roll of extra info in, and evaluate each data 
> point we expose (about a failure) before we do so. Knowing more about a 
> failure is important for our users. Allowing easy access to information that 
> could be used to attack / increase impact of a DOS could be bad. 
> 
> I think we can do it but it is important to not swing the pendulum too far 
> the other direction too fast (give too much info all of a sudden). 

Security by cloud obscurity?

I agree we should evaluate information sharing with security in mind.
However, the black boxing level we have today is bad for OpenStack. At a
certain point once you've added so many belts and suspenders, you can no
longer walk normally any more.

Anyway, lets stop having this discussion in abstract and actually just
evaluate the cases in question that come up.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] oslo.versionedobjects repository is ready for pre-import review

2015-02-02 Thread Doug Hellmann
I’ve prepared a copy of nova.objects as oslo_versionedobjects in 
https://github.com/dhellmann/oslo.versionedobjects-import. The script to create 
the repository is part of the update to the spec in 
https://review.openstack.org/15.

Please look over the code so you are familiar with it. Dan and I have already 
talked about the need to rewrite the tests that depend on nova’s service code, 
so those are set to skip for now. We’ll need to do some work to make the lib 
compatible with python 3, so I’ll make sure the project-config patch does not 
enable those tests, yet.

Please post comments on the code here on the list in case I end up needing to 
rebuild that import repository.

I’ll give everyone a few days before removing the WIP flag from the infra 
change to import this new repository (https://review.openstack.org/151792).

Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Joe Gordon
On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg 
wrote:

> I think the simple answer is "yes". We (keystone) should emit
> notifications. And yes other projects should listen.
>
> The only thing really in discussion should be:
>
> 1: soft delete or hard delete? Does the service mark it as orphaned, or
> just delete (leave this to nova, cinder, etc to discuss)
>
> 2: how to cleanup when an event is missed (e.g rabbit bus goes out to
> lunch).
>


I disagree slightly: I don't think projects should directly listen to the
Keystone notifications; I would rather have the API be something from a
keystone-owned library, say keystonemiddleware. So something like this:

from keystonemiddleware import janitor

keystone_janitor = janitor.Janitor()
keystone_janitor.register_callback(nova.tenant_cleanup)

keystone_janitor.spawn_greenthread()

That way each project doesn't have to include a lot of boilerplate code,
and keystone can easily modify/improve/upgrade the notification mechanism.
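
As a rough illustration of the shape proposed above — keystone does emit an
`identity.project.deleted` notification, but the `Janitor` class, its method
names, and the payload handling below are hypothetical, not an existing
keystonemiddleware API:

```python
# Illustrative sketch of the proposed janitor: a service registers a
# cleanup callback, and the janitor dispatches it when a project-deleted
# notification arrives. The "identity.project.deleted" event_type matches
# keystone's notification convention; everything else is hypothetical.

class Janitor:
    def __init__(self):
        self._callbacks = []

    def register_callback(self, callback):
        """callback(project_id) is invoked once per deleted project."""
        self._callbacks.append(callback)

    def handle_notification(self, event_type, payload):
        # In a real deployment this would be fed by a notification
        # listener on the message bus rather than called directly.
        if event_type == "identity.project.deleted":
            for cb in self._callbacks:
                cb(payload["resource_info"])

# Usage: a service registers its cleanup hook.
cleaned = []
janitor = Janitor()
janitor.register_callback(cleaned.append)  # stand-in for nova.tenant_cleanup
janitor.handle_notification("identity.project.deleted",
                            {"resource_info": "p42"})
print(cleaned)  # prints ['p42']
```

The spawn_greenthread piece would just run the listener loop in the
background; the callback registry is the part each project would touch.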



>
> --Morgan
>
> Sent via mobile
>
> > On Feb 2, 2015, at 10:16, Matthew Treinish  wrote:
> >
> >> On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
> >> This came up in the operators mailing list back in June [1] but given
> the
> >> subject probably didn't get much attention.
> >>
> >> Basically there is a really old bug [2] from Grizzly that is still a
> problem
> >> and affects multiple projects.  A tenant can be deleted in Keystone even
> >> though other resources in other projects are under that project, and
> those
> >> resources aren't cleaned up.
> >
> > I agree this probably can be a major pain point for users. We've had to
> work around it
> > in tempest by creating things like:
> >
> >
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
> > and
> >
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py
> >
> > to ensure we aren't dangling resources after a run. But, this doesn't
> work in
> > all cases either. (like with tenant isolation enabled)
> >
> > I also know there is a stackforge project that is attempting something
> similar
> > here:
> >
> > http://git.openstack.org/cgit/stackforge/ospurge/
> >
> > It would be much nicer if the burden for doing this was taken off users
> and this
> > was just handled cleanly under the covers.
> >
> >>
> >> Keystone implemented event notifications back in Havana [3] but the
> other
> >> projects aren't listening on them to know when a project has been
> deleted
> >> and act accordingly.
> >>
> >> The bug has several people saying "we should talk about this at the
> summit"
> >> for several summits, but I can't find any discussion or summit sessions
> >> related back to the bug.
> >>
> >> Given this is an operations and cross-project issue, I'd like to bring
> it up
> >> again for the Vancouver summit if there is still interest (which I'm
> >> assuming there is from operators).
> >
> > I'd definitely support having a cross-project session on this.
> >
> >>
> >> There is a blueprint specifically for the tenant deletion case but it's
> >> targeted at only Horizon [4].
> >>
> >> Is anyone still working on this? Is there sufficient interest in a
> >> cross-project session at the L summit?
> >>
> >> Thinking out loud, even if nova doesn't listen to events from keystone,
> we
> >> could at least have a periodic task that looks for instances where the
> >> tenant no longer exists in keystone and then take some action (log a
> >> warning, shutdown/archive/, reap, etc).
> >>
> >> There is also a spec for L to transfer instance ownership [5] which
> could
> >> maybe come into play, but I wouldn't depend on it.
> >>
> >> [1]
> http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html
> >> [2] https://bugs.launchpad.net/nova/+bug/967832
> >> [3] https://blueprints.launchpad.net/keystone/+spec/notifications
> >> [4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
> >> [5] https://review.openstack.org/#/c/105367/
> >
> > -Matt Treinish
> > ___
> > OpenStack-operators mailing list
> > openstack-operat...@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Multiple template libraries being used in tree

2015-02-02 Thread Mark McClain
You’re right that the Mako dependency is really a side effect from Alembic.  We 
used jinja for templating radvd because it is used by projects within the 
OpenStack ecosystem and also used in VPNaaS.

mark


> On Feb 2, 2015, at 3:13 PM, Sean M. Collins  wrote:
> 
> Sorry, I should have done a bit more grepping before I sent the e-mail,
> since it appears that Mako is being used by alembic.
> 
> http://alembic.readthedocs.org/en/latest/tutorial.html
> 
> So, should we switch the radvd templating over to Mako instead?
> 
> -- 
> Sean M. Collins
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Morgan Fainberg
Thank you for the heads up. 

—Morgan

-- 
Morgan Fainberg

On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com) wrote:

Just FYI, in case there was any questions,

In addition to testing and reporting on Nova, the IBM PowerKVM CI system is now 
also testing against Keystone patches.

We are happy to also be testing keystone patches on PowerKVM, and will be 
adding other projects soon.

Regards,
Kurt Taylor (krtaylor)
__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Kurt Taylor
Just FYI, in case there were any questions,

In addition to testing and reporting on Nova, the IBM PowerKVM CI system is
now also testing against Keystone patches.

We are happy to also be testing keystone patches on PowerKVM, and will be
adding other projects soon.

Regards,
Kurt Taylor (krtaylor)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Jeremy Stanley
On 2015-02-02 23:29:55 +0300 (+0300), Alexandre Levine wrote:
> I'll do that when I've got myself acquainted with the weekly meetings
> procedure (haven't actually bumped into it before) :)
[...]

Start from the https://wiki.openstack.org/wiki/Meetings page
preamble and follow the instructions linked from it.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine


On 2/2/15 11:15 PM, Michael Still wrote:

On Mon, Feb 2, 2015 at 11:01 PM, Alexandre Levine
 wrote:

Michael,

I'm rather new here, especially in regard to communication matters, so I'd
also be glad to understand how it's done and then I can drive it if it's ok
with everybody.
By saying EC2 sub team - who did you keep in mind? From my team 3 persons
are involved.

I see the sub team as the way of keeping the various organisations who
have expressed interest in helping pulling in the same direction. I'd
suggest you pick a free slot on our meeting calendar and run an irc
meeting there weekly to track overall progress.


I'll do that when I've got myself acquainted with the weekly meetings 
procedure (haven't actually bumped into it before) :)



 From the technical point of view the transition plan could look somewhat
like this (sequence can be different):

1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them against
nova's EC2.
3. Write spec for required API to be exposed from nova so that we get full
info.
4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Communicate and discover all of the existing questions and problematic
points for the switching from existing EC2 API to the new one. Provide
solutions or decisions about them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if any
bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss the
situation there.

This sounds really good to me -- this is the sort of thing you'd be
tracking against in that irc meeting, although presumably you'd
negotiate as a group exactly what the steps are and who is working on
what.

Do you see transitioning users to the external EC2 implementation as a
final step in this list? I know you've only gone as far as Vancouver
here, but I want to be explicit about the intended end goal.


Yes, that's correct. The very final step, though, would be cleaning the 
EC2 code out of nova. But you're right, the major goal would be to 
make the external EC2 API production-ready and to have all of the necessary 
means for users to transition seamlessly (no downtime, no instance 
recreation required).

So I can point out at least three distinct major milestones here:

1. EC2 API in nova is back and revived (no showstoppers, all of the 
currently employed functionality safe and sound, new tests added to 
check and ensure that).

2. External EC2 API is production-ready.
3. Nova is relieved of the EC2 stuff.

Vancouver is somewhere in between 1 and 3.



Michael, I am still wondering: who's going to be responsible for timely
reviews and approvals of the fixes and tests we're going to contribute to
nova? So far this is the biggest risk. Is there any way to allow some of us
to participate in the process?

Sean has offered here, for which I am grateful. Your team as it forms
should also start reviewing each other's work, as that will reduce the
workload somewhat for Sean and other cores.


We've already started.


I think given the level of interest here we can have a serious
discussion at Vancouver about if EC2 should be nominated as a priority
task for the L release, which is our more formal way of cementing this
at the beginning of a release cycle.

Thanks again to everyone who has volunteered to help out with this.
35% of our users are grateful!

Michael



On 2/2/15 2:46 AM, Michael Still wrote:

So, its exciting to me that we seem to developing more forward
momentum here. I personally think the way forward is a staged
transition from the in-nova EC2 API to the stackforge project, with
testing added to ensure that we are feature complete between the two.
I note that Soren disagrees with me here, but that's ok -- I'd like to
see us work through that as a team based on the merits.

So... It sounds like we have an EC2 sub team forming. How do we get
that group meeting to come up with a transition plan?

Michael

On Sun, Feb 1, 2015 at 4:12 AM, Davanum Srinivas 
wrote:

Alex,

Very cool. thanks.

-- dims

On Sat, Jan 31, 2015 at 1:04 AM, Alexandre Levine
 wrote:

Davanum,

Now that the picture with the both EC2 API solutions has cleared up a
bit, I
can say yes, we'll be adding the tempest tests and doing devstack
integration.

Best regards,
Alex Levine

On 1/31/15 2:21 AM, Davanum Srinivas wrote:

Alexandre, Randy,

Are there plans afoot to add support to switch on stackforge/ec2-api
in devstack? add tempest tests etc? CI Would go a long way in
alleviating concerns i think.

thanks,
dims

On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy 
wrote:

> As you know we have been driving forward on the stack forge project
> and
> it's our intention to continue to support it over time, plus
> reinvigorate
> the GCE APIs when that makes sense. So we're supportive of deprecating
> from Nova to focus on EC2 API in Nova.  I also think it's good for
> these
> APIs to b

Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Daniel P. Berrange
On Mon, Feb 02, 2015 at 01:21:31PM -0500, Andrew Laski wrote:
> 
> On 02/02/2015 11:26 AM, Daniel P. Berrange wrote:
> >On Mon, Feb 02, 2015 at 11:19:45AM -0500, Andrew Laski wrote:
> >>On 02/02/2015 05:58 AM, Daniel P. Berrange wrote:
> >>>On Sun, Feb 01, 2015 at 11:20:08AM -0800, Noel Burton-Krahn wrote:
> Thanks for bringing this up, Daniel.  I don't think it makes sense to have
> a timeout on live migration, but operators should be able to cancel it,
> just like any other unbounded long-running process.  For example, there's
> no timeout on file transfers, but they need an interface report progress
> and to cancel them.  That would imply an option to cancel evacuation too.
> >>>There has been periodic talk about a generic "tasks API" in Nova for 
> >>>managing
> >>>long running operations and getting information about their progress, but I
> >>>am not sure what the status of that is. It would obviously be applicable to
> >>>migration if that's a route we took.
> >>Currently the status of a tasks API is that it would happen after the API
> >>v2.1 microversions work has created a suitable framework in which to add
> >>tasks to the API.
> >So is all work on tasks blocked by the microversions support ? I would have
> >though that would only block places where we need to modify existing APIs.
> >Are we not able to add APIs for listing / cancelling tasks as new APIs
> >without such a dependency on microversions ?
> 
> Tasks work is certainly not blocked on waiting for microversions. There is a
> large amount of non API facing work that could be done to move forward the
> idea of a task driving state changes within Nova. I would very likely be
> working on that if I wasn't currently spending much of my time on cells v2.

Ok, thanks for the info. So from the POV of migration, I'll focus on the
non-API stuff, and expect the tasks work to provide the API mechanisms
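The cancel-plus-progress model Noel describes upthread (no timeout, but an
operator can watch and abort the operation) can be sketched in a few lines.
The class and method names here are purely illustrative, not Nova's task API:

```python
import threading


class CancellableOperation:
    """Illustrative sketch of an unbounded long-running operation that
    reports progress and can be cancelled, rather than being killed by
    an arbitrary timeout. Not Nova's actual task implementation."""

    def __init__(self, total_steps):
        self.total_steps = total_steps
        self.completed = 0
        self._cancelled = threading.Event()

    def cancel(self):
        # An operator-facing API (e.g. a tasks endpoint) would call this.
        self._cancelled.set()

    @property
    def progress(self):
        # Fraction of work done, suitable for reporting to an operator.
        return self.completed / self.total_steps

    def run(self, do_step):
        for _ in range(self.total_steps):
            if self._cancelled.is_set():
                return "cancelled"
            do_step()  # e.g. copy one chunk of guest memory
            self.completed += 1
        return "finished"
```

The same shape applies to file transfers or evacuation: the loop body
differs, but the cancel flag and progress counter are the generic parts a
tasks API would expose.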

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Michael Still
On Mon, Feb 2, 2015 at 11:01 PM, Alexandre Levine
 wrote:
> Michael,
>
> I'm rather new here, especially in regard to communication matters, so I'd
> also be glad to understand how it's done and then I can drive it if it's ok
> with everybody.
> By saying EC2 sub team - who did you keep in mind? From my team 3 persons
> are involved.

I see the sub team as the way of keeping the various organisations who
have expressed interest in helping pulling in the same direction. I'd
suggest you pick a free slot on our meeting calendar and run an irc
meeting there weekly to track overall progress.

> From the technical point of view the transition plan could look somewhat
> like this (sequence can be different):
>
> 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
> 2. Contribute Tempest tests for EC2 functionality and employ them against
> nova's EC2.
> 3. Write spec for required API to be exposed from nova so that we get full
> info.
> 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
> 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
> 6. Communicate and discover all of the existing questions and problematic
> points for the switching from existing EC2 API to the new one. Provide
> solutions or decisions about them.
> 7. Do performance testing of the new stackforge/ec2 and provide fixes if any
> bottlenecks come up.
> 8. Have all of the above prepared for the Vancouver summit and discuss the
> situation there.

This sounds really good to me -- this is the sort of thing you'd be
tracking against in that irc meeting, although presumably you'd
negotiate as a group exactly what the steps are and who is working on
what.

Do you see transitioning users to the external EC2 implementation as a
final step in this list? I know you've only gone as far as Vancouver
here, but I want to be explicit about the intended end goal.

> Michael, I am still wondering, who's going to be responsible for timely
> reviews and approvals of the fixes and tests we're going to contribute to
> nova? So far this is the biggest risk. Is there anyway to allow some of us
> to participate in the process?

Sean has offered here, for which I am grateful. Your team as it forms
should also start reviewing each other's work, as that will reduce the
workload somewhat for Sean and other cores.

I think given the level of interest here we can have a serious
discussion at Vancouver about if EC2 should be nominated as a priority
task for the L release, which is our more formal way of cementing this
at the beginning of a release cycle.

Thanks again to everyone who has volunteered to help out with this.
35% of our users are grateful!

Michael


> On 2/2/15 2:46 AM, Michael Still wrote:
>>
>> So, its exciting to me that we seem to developing more forward
>> momentum here. I personally think the way forward is a staged
>> transition from the in-nova EC2 API to the stackforge project, with
>> testing added to ensure that we are feature complete between the two.
>> I note that Soren disagrees with me here, but that's ok -- I'd like to
>> see us work through that as a team based on the merits.
>>
>> So... It sounds like we have an EC2 sub team forming. How do we get
>> that group meeting to come up with a transition plan?
>>
>> Michael
>>
>> On Sun, Feb 1, 2015 at 4:12 AM, Davanum Srinivas 
>> wrote:
>>>
>>> Alex,
>>>
>>> Very cool. thanks.
>>>
>>> -- dims
>>>
>>> On Sat, Jan 31, 2015 at 1:04 AM, Alexandre Levine
>>>  wrote:

 Davanum,

 Now that the picture with the both EC2 API solutions has cleared up a
 bit, I
 can say yes, we'll be adding the tempest tests and doing devstack
 integration.

 Best regards,
Alex Levine

 On 1/31/15 2:21 AM, Davanum Srinivas wrote:
>
> Alexandre, Randy,
>
> Are there plans afoot to add support to switch on stackforge/ec2-api
> in devstack? add tempest tests etc? CI Would go a long way in
> alleviating concerns i think.
>
> thanks,
> dims
>
> On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy 
> wrote:
>>
>> As you know we have been driving forward on the stack forge project
>> and
>> it¹s our intention to continue to support it over time, plus
>> reinvigorate
>> the GCE APIs when that makes sense. So we¹re supportive of deprecating
>> from Nova to focus on EC2 API in Nova.  I also think it¹s good for
>> these
>> APIs to be able to iterate outside of the standard release cycle.
>>
>>
>>
>> --Randy
>>
>> VP, Technology, EMC Corporation
>> Formerly Founder & CEO, Cloudscaling (now a part of EMC)
>> +1 (415) 787-2253 [google voice]
>> TWITTER: twitter.com/randybias
>> LINKEDIN: linkedin.com/in/randybias
>> ASSISTANT: ren...@emc.com
>>
>>
>>
>>
>>
>>
>> On 1/29/15, 4:01 PM, "Michael Still"  wrote:
>>
>>> Hi,
>>>
>>> as you might have read on openstack-dev, the Nova EC2 API

Re: [openstack-dev] [Neutron] Multiple template libraries being used in tree

2015-02-02 Thread Sean M. Collins
Sorry, I should have done a bit more grepping before I sent the e-mail,
since it appears that Mako is being used by alembic.

http://alembic.readthedocs.org/en/latest/tutorial.html

So, should we switch the radvd templating over to Mako instead?

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Multiple template libraries being used in tree - Switch to using only Jinja2?

2015-02-02 Thread Sean M. Collins
Hi,

During my review of the full-stack tests framework[1], I noticed that
Mako was being added as an explicit dependency. I know that in the code
for creating radvd configs for IPv6, we use Jinja, but I did a quick
git grep and see that we have one file[2] that uses Mako for templating.

My intention is to replace the one file that uses Mako with Jinja2, to
keep things consistent.

Thoughts?

[1]: https://review.openstack.org/#/c/128259/
[2]: 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/migration/alembic_migrations/script.py.mako
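For what it's worth, the mechanical part of such a switch is small: for
simple variable substitution the two engines differ mainly in syntax
(Mako's `${...}` in script.py.mako vs Jinja2's `{{ ... }}`). Below is a
stdlib-only sketch of the difference, using tiny stand-ins for the real
`mako.template.Template(...).render()` and `jinja2.Template(...).render()`
calls so the snippet has no dependencies:

```python
import re

# Mako (as used in script.py.mako) and Jinja2 spell simple
# substitution differently:
MAKO_TMPL = 'revision = ${up_revision}'      # Mako: ${expr}
JINJA_TMPL = 'revision = {{ up_revision }}'  # Jinja2: {{ var }}


def render_mako_like(template, **ctx):
    # Stand-in for mako.template.Template(template).render(**ctx).
    return re.sub(r'\$\{(\w+)\}', lambda m: str(ctx[m.group(1)]), template)


def render_jinja_like(template, **ctx):
    # Stand-in for jinja2.Template(template).render(**ctx).
    return re.sub(r'\{\{\s*(\w+)\s*\}\}',
                  lambda m: str(ctx[m.group(1)]), template)
```

Mako templates that embed arbitrary Python expressions or `<% %>` blocks
would need real rewriting, but the alembic script template is close to
pure substitution.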
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Schedule for Trove Mid-Cycle Sprint

2015-02-02 Thread Nikhil Manchanda
Hi folks:

I've updated the schedule for the Trove Mid-Cycle Sprint at
https://wiki.openstack.org/wiki/Sprints/TroveKiloSprint#Schedule
and have linked the slots on the time-table to the etherpads that we're
planning on using to track the discussion.

I've also updated the page with some more information about remote
participation in case you're not able to make it to the mid-cycle
location (Seattle, WA) in person.

Hope to see many of you tomorrow at the mid-cycle sprint.

Cheers,
Nikhil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-02-02 Thread Stefano Maffulli
On Fri, 2015-01-30 at 23:05 +, Everett Toews wrote:
> To converge the OpenStack APIs to a consistent and pragmatic RESTful
> design by creating guidelines that the projects should follow. The
> intent is not to create backwards incompatible changes in existing
> APIs, but to have new APIs and future versions of existing APIs
> converge.

It's looking good already. I think it would be good also to mention the
end-recipients of the consistent and pragmatic RESTful design so that
whoever reads the mission is reminded why that's important. Something
like:

To improve developer experience converging the OpenStack API to
a consistent and pragmatic RESTful design. The working group
creates guidelines that all OpenStack projects should follow,
avoids introducing backwards incompatible changes in existing
APIs and promotes convergence of new APIs and future versions of
existing APIs.

more or less...

/stef


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Manila driver for CephFS

2015-02-02 Thread Jake Kugel
OK, thanks Sebastien and Valeriy.

Jake


Sebastien Han  wrote on 02/02/2015 06:51:10 
AM:

> From: Sebastien Han 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 02/02/2015 06:54 AM
> Subject: Re: [openstack-dev] [Manila] Manila driver for CephFS
> 
> I believe this will start somewhere after Kilo.
> 
> > On 28 Jan 2015, at 22:59, Valeriy Ponomaryov 
>  wrote:
> > 
> > Hello Jake,
> > 
> > The main thing that should be mentioned is that the blueprint has no
> > assignee. Also, it was created a long time ago without any activity after it.
> > I did not hear of any intentions about it; moreover, I did not see any
> > drafts, at least.
> > 
> > So, I guess, it is open for volunteers.
> > 
> > Regards,
> > Valeriy Ponomaryov
> > 
> > On Wed, Jan 28, 2015 at 11:30 PM, Jake Kugel  
wrote:
> > Hi,
> > 
> > I see there is a blueprint for a Manila driver for CephFS here [1]. It
> > looks like it was opened back in 2013 but still in Drafting state. 
Does
> > anyone know more status about this one?
> > 
> > Thank you,
> > -Jake
> > 
> > [1]  https://blueprints.launchpad.net/manila/+spec/cephfs-driver
> > 
> > 
> > 
__
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
__
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> Cheers.
> 
> Sébastien Han
> Cloud Architect
> 
> "Always give 100%. Unless you're giving blood."
> 
> Phone: +33 (0)1 49 70 99 72
> Mail: sebastien@enovance.com
> Address : 11 bis, rue Roquépine - 75008 Paris
> Web : www.enovance.com - Twitter : @enovance
> 
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Talk on Jinja Metatemplates for upcoming summit

2015-02-02 Thread Pavlo Shchelokovskyy
Hi Pratik,

What would be the aim for this templating? I ask since we in Heat try to
keep the imperative logic like e.g. if-else out of heat templates, leaving
it to other services. Plus there is already a spec for a heat template
function to repeat pieces of template structure [1].

I can definitely say that some other OpenStack projects that are consumers
of Heat will be interested - Trove already tries to use Jinja templates to
create Heat templates [2], and possibly Sahara and Murano might be
interested as well (I suspect though the latter already uses YAQL for that).

[1] https://review.openstack.org/#/c/140849/
[2]
https://github.com/openstack/trove/blob/master/trove/templates/default.heat.template
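For context, the pre-processing Trove does amounts to expanding loops
outside Heat and handing Heat a plain HOT structure, which is exactly what
the proposed repeat function would express inside the template instead.
A stdlib-only sketch of that expansion (the resource names and HOT layout
here are illustrative, not Trove's actual template):

```python
def expand_servers(count, flavor):
    """Expand a loop into N server resources before the template
    reaches Heat, the way a Jinja for-loop in a meta-template would."""
    resources = {
        'server_%d' % i: {
            'type': 'OS::Nova::Server',
            'properties': {'flavor': flavor},
        }
        for i in range(count)
    }
    return {
        'heat_template_version': '2013-05-23',
        'resources': resources,
    }
```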

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Mon, Feb 2, 2015 at 8:29 PM, Pratik Mallya 
wrote:

> Hello Heat Developers,
>
> As part of an internal development project at Rackspace, I implemented a
> mechanism to allow using Jinja templating system in heat templates. I was
> hoping to give a talk on the same for the upcoming summit (which will be
> the first summit after I started working on openstack). Have any of you
> worked/ are working on something similar? If so, could you please contact
> me and we can maybe propose a joint talk? :-)
>
> Please let me know! It’s been interesting work and I hope the community
> will be excited to see it.
>
> Thanks!
> -Pratik
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with instance consoles and novnc

2015-02-02 Thread Chris Friesen

On 02/02/2015 01:27 PM, Mathieu Gagné wrote:

On 2015-02-02 11:36 AM, Chris Friesen wrote:

On 01/30/2015 06:26 AM, Jesse Pretorius wrote:


Have you tried manually updating the NoVNC and websockify files to later
versions from source?


We were already using a fairly recent version of websockify, but it
turns out that we needed to upversion the novnc package.



Which version are you using?


Pretty sure we're on 0.5.1 now.

Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with instance consoles and novnc

2015-02-02 Thread Mathieu Gagné

On 2015-02-02 11:36 AM, Chris Friesen wrote:

On 01/30/2015 06:26 AM, Jesse Pretorius wrote:


Have you tried manually updating the NoVNC and websockify files to later
versions from source?


We were already using a fairly recent version of websockify, but it
turns out that we needed to upversion the novnc package.



Which version are you using?

--
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About Sahara Oozie plan

2015-02-02 Thread Trevor McKay
Answers to other questions:

2) (first part) Yes, I think Oozie shell actions are a great idea. I can
help work on a spec for this.

In general, Sahara should be able to support any kind of Oozie action.
Each will require a new job type, changes to the Oozie engine, and a UI
form to handle submission. We talked about shell actions once upon a
time. I don't think a spec for that will be too difficult.

Typically when adding new Oozie actions, I start by running things with
the Oozie command line to figure out what's possible and what the
workflow.xml looks like in general.
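As a starting point, a shell-action workflow.xml is small enough to
generate programmatically. Here is a hedged sketch of what an Oozie engine
might emit for a shell action, with element names taken from the shell
action extension docs linked later in the thread; the job-tracker and
name-node values are the usual Oozie placeholders, and this is not the XML
Sahara actually generates:

```python
import xml.etree.ElementTree as ET


def shell_action_workflow(name, exec_cmd, args):
    """Build a minimal Oozie shell-action workflow.xml string.
    Illustrative only -- element names follow the Oozie shell action
    extension documentation."""
    wf = ET.Element('workflow-app',
                    {'name': name, 'xmlns': 'uri:oozie:workflow:0.2'})
    ET.SubElement(wf, 'start', {'to': 'shell-node'})
    action = ET.SubElement(wf, 'action', {'name': 'shell-node'})
    shell = ET.SubElement(action, 'shell',
                          {'xmlns': 'uri:oozie:shell-action:0.1'})
    ET.SubElement(shell, 'job-tracker').text = '${jobTracker}'
    ET.SubElement(shell, 'name-node').text = '${nameNode}'
    ET.SubElement(shell, 'exec').text = exec_cmd
    for arg in args:
        ET.SubElement(shell, 'argument').text = arg
    ET.SubElement(action, 'ok', {'to': 'end'})
    ET.SubElement(action, 'error', {'to': 'fail'})
    kill = ET.SubElement(wf, 'kill', {'name': 'fail'})
    ET.SubElement(kill, 'message').text = 'Shell action failed'
    ET.SubElement(wf, 'end', {'name': 'end'})
    return ET.tostring(wf, encoding='unicode')
```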


We also talked about allowing a user to upload raw workflows -- the
difficulty there is figuring out what Sahara generates vs what the user
writes, so this may be a more complicated topic. I think it will have to
wait for another cycle.

2) (error information)

Yes, the lack of good error information is a big problem in my opinion,
but we have no plan for it at this time.

The OpenStack approach seems to be to look through lots of log files to
identify errors.  For EDP, we may need to support a similar approach by
allowing job logs to be easily retrieved from clusters and written
somewhere a user can parse through them for error information.  Any
ideas on how to do this are welcome.

Trevor

-- 

(2) Sahara oozie plan

So when I searched for a solution for the HBase test case, I found
http://archive.cloudera.com/cdh5/cdh/5/oozie/DG_ShellActionExtension.html, which
talks about the Oozie shell action job type. I believe my first issue with the
EDP job in a Java action can be solved by a shell action, because I can set the
`hbase classpath` in workflow.xml, just like the way I run this jar in the VM
console by command. So I raised a bp for adding an Oozie shell action:
https://blueprints.launchpad.net/sahara/+spec/add-edp-shell-action  I will do
further research on the bp/specs and update the spec. In today's meeting you
mentioned allowing the user to upload his own workflow.xml. I am interested in
this, and we can provide our support for this part, so can you provide some
bp/specs or other docs for me, so we can discuss more?

Also, is there any plan to provide EDP job error info to the user? I
think this is also important; currently we just have a "killed" label and
no more information.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday February 3rd at 19:00 UTC

2015-02-02 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday February 3rd, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

And in case you missed it or would like a refresher, meeting logs and
minutes from our last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-01-27-19.06.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-01-27-19.06.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-01-27-19.06.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] unable to reproduce bug 1317363‏

2015-02-02 Thread Kevin Benton
The mailing list isn't a great place to discuss reproducing a bug. Post
this comment on the bug report instead of the mailing list. That way the
person who reported it and the ones who triaged it can see this information
and respond. They might not be watching the dev mailing list as closely.



On Mon, Feb 2, 2015 at 10:17 AM, bharath thiruveedula <
bharath_...@hotmail.com> wrote:

> Hi,
>
> I am Bharath Thiruveedula. I am new to OpenStack Neutron and networking. I
> am trying to solve bug 1317363, but I am unable to reproduce it.
> The steps I have done to reproduce:
>
> 1) I have created a network with external = True
> 2)Created a subnet for the above network with CIDR=172.24.4.0/24 with
> gateway-ip =172.24.4.5
> 3)Created the router
> 4)Set the gateway interface to the router
> 5)Tried to change subnet gateway-ip but got this error
> "Gateway ip 172.24.4.7 conflicts with allocation pool
> 172.24.4.6-172.24.4.254"
> I used this command for that
> "neutron subnet-update ff9fe828-9ca2-42c4-9997-3743d8fc0b0c --gateway-ip
> 172.24.4.7"
>
> Can you please help me with this issue?
>
>
> -- Bharath Thiruveedula
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
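For anyone hitting the same error, the conflict in step 5 is easy to
reproduce in isolation: with a /24 and gateway .5, the default allocation
pool covers .6 through .254, so moving the gateway to .7 lands inside the
pool. A stdlib sketch of the check (illustrative logic, not Neutron's
actual implementation):

```python
import ipaddress


def gateway_conflicts_with_pool(gateway_ip, pool_start, pool_end):
    """Return True if the gateway IP falls inside the allocation pool,
    which is the condition behind Neutron's conflict error above."""
    gw = ipaddress.ip_address(gateway_ip)
    return (ipaddress.ip_address(pool_start) <= gw
            <= ipaddress.ip_address(pool_end))
```

To change the gateway to an address inside the pool, the allocation pool
would first have to be shrunk or moved.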
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Steve Gordon
- Original Message -
> From: "Ian Wells" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
>
> On 2 February 2015 at 09:49, Chris Friesen 
> wrote:
> 
> > On 02/02/2015 10:51 AM, Jay Pipes wrote:
> >
> >> This is a bug that I discovered when fixing some of the NUMA related nova
> >> objects. I have a patch that should fix it up shortly.
> >>
> >
> > Any chance you could point me at it or send it to me?
> >
> >  This is what happens when we don't have any functional testing of stuff
> >> that is
> >> merged into master...
> >>
> >
> > Indeed.  Does tempest support hugepages/NUMA/pinning?
> >
> 
> This is a running discussion, but largely no - because this is tied to the
> capabilities of the host, there's no guarantee for a given scenario what
> result you would get (because Tempest will run on any hardware).
> 
> If you have test cases that should pass or fail on a NUMA-capable node, can
> you write them up?  We're working on NUMA-specific testing right now
> (though I'm not sure who, specifically, is working on the test case side of
> that).

Vladik and Sean (CC'd) are working on these.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About Sahara Oozie plan

2015-02-02 Thread Trevor McKay
Hi,

  Thanks for your patience.  I have been consumed with spark-swift, but
I can start to address these questions now :)

On (1) (a) below, I will try to reproduce and look at how we can better
support classpath in EDP. I'll let you know what I find.
We may need to add some configuration options for EDP or change how it
works.

On (1) (b) below, in the edp-move-examples.rst spec for Juno we
described a directory structure that could be used
for separating hadoop1 vs hadoop2 specific directories.  Maybe we can do
something similar based on plugins

For instance, if we have some hbase examples, we can make subdirectories
for each plugin.  Common parts can be
shared, plugin-specific files can be stored in the subdirectories.

(and perhaps the "hadoop2" example already there should just be a
subdirectory under "edp-java")

Best,

Trevor

--

Hi McKay,
Thanks for your support.
I will discuss the details of these items below:

(1) EDP job in Java action

   The background is that we want to write integration test cases for newly
added services like HBase and ZooKeeper, just like the edp-examples do
(sample code under sahara/etc/edp-examples/). So I thought I could write an
example via an EDP job with a Java action to test the HBase service. I wrote
HBaseTest.java, packaged it as a jar file, and ran this jar manually with
the command "java -cp `hbase classpath` HBaseTest.jar HBaseTest"; it works
well in the VM (provisioned by Sahara with the CDH plugin).
“/usr/lib/jvm/java-7-oracle-cloudera/bin/java -cp "HBaseTest.jar:`hbase
classpath`" HBaseTest”
So I wanted to run this job via Horizon on the Sahara job execution page,
but found no place to pass the `hbase classpath` parameter (I have tried
java_opt, configuration, and args; all failed). When I pass "-cp
`hbase classpath`" to java_opts on the Horizon job execution page, Oozie
raises the error below:

“2015-01-15 16:43:26,074 WARN
org.apache.oozie.action.hadoop.JavaActionExecutor:
SERVER[hbase-master-copy-copy-001.novalocal] USER[hdfs] GROUP[-] TOKEN[]
APP[job-wf] JOB[045-150105050354389-oozie-oozi-W]
ACTION[045-150105050354389-oozie-oozi-W@job-node] LauncherMapper
died, check Hadoop LOG for job
[hbase-master-copy-copy-001.novalocal:8032:job_1420434100219_0054]
2015-01-15 16:43:26,172 INFO
org.apache.oozie.command.wf.ActionEndXCommand:
SERVER[hbase-master-copy-copy-001.novalocal] USER[hdfs] GROUP[-] TOKEN[]
APP[job-wf] JOB[045-150105050354389-oozie-oozi-W]
ACTION[045-150105050354389-oozie-oozi-W@job-node] ERROR is
considered as FAILED for SLA”

So I am stuck with this issue; I can't write the integration test in Sahara
(could not pass the classpath parameter). I have checked the official Oozie
site, https://cwiki.apache.org/confluence/display/OOZIE/Java+Cookbook, and
found no help info.
 
   So about the EDP job in Java, I have two problems right now:
a)  How to pass the classpath to the Java action, as I mentioned before.
This also reminds me that we could allow the user to modify or upload their
own workflow.xml; then we can provide more options for the user.
b)  I am concerned that it's hard to have a common edp-example for HBase
for all plugins (CDH, HDP), because the example code depends on third-party
jars (for example hbase-client.jar…) and different platforms (CDH, HDP) may
have different versions of hbase-client.jar; for example, CDH uses
hbase-client-0.98.6-cdh5.2.1.jar.

attached is a zip file which contains HBaseTest.jar and the source code.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TaskFlow 0.7.0 released

2015-02-02 Thread Joshua Harlow

The Oslo team is pleased to announce the release of:

TaskFlow 0.7.0: taskflow structured state management library.

For more details, please see the git log history below and:

http://launchpad.net/taskflow/+milestone/0.7.0

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

Notable changes


* Using non-deprecated oslo.utils and oslo.serialization imports.
* Added note(s) about publicly consumable types into docs.
* Increase robustness of WBE producer/consumers by supporting and using
  the kombu provided feature to retry/ensure on transient/recoverable
  failures (such as timeouts).
* Move the jobboard/job bases to a jobboard/base module and
  move the persistence base to the parent directory (standardizes how
  all pluggable types now have a similar base module in a similar location,
  making the layout of taskflow's codebase easier to understand/follow).
* Add executor statistics; using taskflow.futures executors now provides
  a useful way to know the following about submissions:
  --------------------------------------------------------------------------
  | Statistic | What it is                                                 |
  --------------------------------------------------------------------------
  | failures  | How many submissions ended up raising exceptions           |
  | executed  | How many submissions were executed (failed or not)         |
  | runtime   | Total runtime of all submissions executed (failed or not)  |
  | cancelled | How many submissions were cancelled before executing       |
  --------------------------------------------------------------------------
* The taskflow logger module does not provide a logging adapter [bug]
* Use monotonic time when/if available for stopwatches (py3.3+ natively
  supports this) and other time.time usage (where the usage of time.time
  only cares about the duration between two points in time).
* Make all/most usage of type errors follow a similar pattern (exception
  cleanup).
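Three of the changes above (monotonic time for stopwatches, split time
capturing, and a maximum for elapsed()) fit in one small sketch. This is an
illustration of the ideas, not taskflow's actual StopWatch class:

```python
import time

# Prefer a monotonic clock for durations; fall back to time.time()
# on interpreters without one (pre-3.3), since only the difference
# between two points in time matters here.
_now = getattr(time, 'monotonic', time.time)


class StopWatch:
    """Illustrative stopwatch with split capture and a capped elapsed()."""

    def __init__(self):
        self._started_at = None
        self.splits = []

    def start(self):
        self._started_at = _now()
        return self

    def split(self):
        # Record an intermediate duration without stopping the watch.
        self.splits.append(_now() - self._started_at)
        return self.splits[-1]

    def elapsed(self, maximum=None):
        # Optionally clamp the reported duration to a maximum.
        amount = _now() - self._started_at
        if maximum is not None:
            amount = min(amount, maximum)
        return amount
```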

Changes in /homes/harlowja/dev/os/taskflow 0.6.1..0.7.0
---

NOTE: Skipping requirement commits...

19f9674 Abstract out the worker finding from the WBE engine
99b92ae Add and use a nicer kombu message formatter
df6fb03 Remove duplicated 'do' in types documentation
43d70eb Use the class defined constant instead of raw strings
344b3f6 Use kombu socket.timeout alias instead of socket.timeout
d5128cf Stopwatch usage cleanup/tweak
2e43b67 Add note about publicly consumable types
e9226ca Add docstring to wbe proxy to denote not for public use
80888c6 Use monotonic time when/if available
7fe2945 Link WBE docs together better (especially around arguments)
f3a1dcb Emit a warning when no routing keys provided on publish()
802bce9 Center SVG state diagrams
97797ab Use importutils.try_import for optional eventlet imports
84d44fa Shrink the WBE request transition SVG image size
ca82e20 Add a thread bundle helper utility + tests
e417914 Make all/most usage of type errors follow a similar pattern
2f04395 Leave use-cases out of WBE developer documentation
e3e2950 Allow just specifying 'workers' for WBE entrypoint
66fc2df Add comments to runner state machine reaction functions
35745c9 Fix coverage environment
fc9cb88 Use explicit WBE worker object arguments (instead of kwargs)
0672467 WBE documentation tweaks/adjustments
55ad11f Add a WBE request state diagram + explanation
45ef595 Tidy up the WBE cache (now WBE types) module
1469552 Fix leftover/remaining 'oslo.utils' usage
93d73b8 Show the failure discarded (and the future intention)
5773fb0 Use a class provided logger before falling back to module
addc286 Use explicit WBE object arguments (instead of kwargs)
342c59e Fix persistence doc inheritance hierarchy
072210a The gathered runtime is for failures/not failures
410efa7 add clarification re parallel engine
cb27080 Increase robustness of WBE producer/consumers
bb38457 Move implementation(s) to there own sections
f14ee9e Move the jobboard/job bases to a jobboard/base module
ac5345e Have the serial task executor shutdown/restart its executor
426484f Mirror the task executor methods in the retry action
d92c226 Add back a 'eventlet_utils' helper utility module
1ed0f22 Use constants for runner state machine event names
bfc1136 Remove 'SaveOrderTask' and test state in class variables
22eef96 Provide the stopwatch elapsed method a maximum
3968508 Fix unused and conflicting variables
2280f9a Switch to using 'oslo_serialization' vs 'oslo.serialization'
d748db9 Switch to using 'oslo_utils' vs 'oslo.utils'
9c15eff Add executor statistics
bf2f205 Use oslo.utils reflection for class name
9fe99ba Add split time capturing to the stop watch
42a665d Use platform neutral line separator(s)
eb536da Create and use a multiprocessing sync manager subclass
4c756ef Use a single sender
778e210 Include the 'old_state' in all currently provided listeners
c07a96b Update the README.rst with accurate requirements
2f7d86a Include docstrings for parallel engine types/strings supported
0d602a

[openstack-dev] [Keystone] Sample Config Update (until final decision by larger thread occurs)

2015-02-02 Thread Morgan Fainberg
I am making a quick change in how Keystone is handling updates to the sample 
config files until all of those discussion points are addressed in the big 
thread of “how do we handle sample configs".

These changes are just to help limit rebase issues and make contributions a bit 
easier to manage:

1. Please do not update the sample configuration in your main patch chain. 
Update the sample configuration outside (once your changes merge) or at the end 
of the chain.

2. I’ll start -1ing anything that is dependent on a sample.config change, this 
is so that we can avoid rebase nightmares because a lot of things touch the 
sample config.

3. I or one of the keystone core will be attempting to update the sample config 
on a regular basis to catch any updates that were otherwise missed.

4. Please do not add a -1 to a Keystone review for not updating the sample 
config. I’m asking the core team to ignore these -1s (only there because a 
sample config was not updated).

I hope this helps to keep code moving into the repository with fewer headaches. 
Once all the discussion around where sample config files go has been resolved 
(OpenStack wide) these policies are subject to change.

Cheers,
Morgan

-- 
Morgan Fainberg
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine


On 2/2/15 8:30 PM, Matthew Treinish wrote:

On Mon, Feb 02, 2015 at 07:35:46PM +0300, Alexandre Levine wrote:

Thank you Sean.

We'll be tons of EC2 Tempest tests for your attention shortly.
How would you prefer them? In several reviews, I believe. Not in one, right?

Let's take a step back for a sec. How many tests and what kind are we talking
about here?
We've got our root in /tempest/thirdparty/aws/ec2 (which we considered a 
better name than boto) and it works via botocore (so no boto in any case).

12 files with 79 API tests.
However, we've also got some complex scenario tests, unfortunately using 
boto, not botocore. Most of them, though, are about VPC stuff, so we'll run 
those against our stackforge EC2 only.


Please let us know where and how to put it.


I'm thinking it might be better to not just try and dump all this stuff in
tempest. While in the past we've just dumped all of this in tempest, moving
forward I don't think that's what we want to be doing. The current ec2 tests
have always felt out of place to me in tempest and historically haven't been
maintained as well as the other tests. If we're talking about ramping up the ec2
testing we probably should look at migrating everything elsewhere, especially
given that it just essentially nova testing. I see 2 better options here: we
either put the tests in the tree for the project with the ec2 implementation, or
we create a new repo like tempest-ec2 for testing this. In either case we'll
leverage tempest-lib to make sure the bits your existing testing is relying on
are consumable outside of the tempest repo.

-Matt Treinish



On 2/2/15 6:55 PM, Sean Dague wrote:

On 02/02/2015 07:01 AM, Alexandre Levine wrote:

Michael,

I'm rather new here, especially in regard to communication matters, so
I'd also be glad to understand how it's done and then I can drive it if
it's ok with everybody.
By saying EC2 sub team - who did you keep in mind? From my team 3
persons are involved.

 From the technical point of view the transition plan could look somewhat
like this (sequence can be different):

1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them
against nova's EC2.
3. Write spec for required API to be exposed from nova so that we get
full info.
4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Communicate and discover all of the existing questions and
problematic points for the switching from existing EC2 API to the new
one. Provide solutions or decisions about them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if
any bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss
the situation there.

Michael, I am still wondering, who's going to be responsible for timely
reviews and approvals of the fixes and tests we're going to contribute
to nova? So far this is the biggest risk. Is there any way to allow some
of us to participate in the process?

I am happy to volunteer to shephard these reviews. I'll try to keep an
eye on them, and if something is blocking please just ping me directly
on IRC in #openstack-nova or bring them forward to the weekly Nova meeting.

-Sean









Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Chris Friesen

On 02/02/2015 12:13 PM, Ian Wells wrote:

On 2 February 2015 at 09:49, Chris Friesen 


Indeed.  Does tempest support hugepages/NUMA/pinning?


This is a running discussion, but largely no - because this is tied to the
capabilities of the host, there's no guarantee for a given scenario what result
you would get (because Tempest will run on any hardware).

If you have test cases that should pass or fail on a NUMA-capable node, can you
write them up?  We're working on NUMA-specific testing right now (though I'm not
sure who, specifically, is working on the test case side of that).


I don't really have time to write up individual testcases right now, but I think 
a good start would be to test the following features:



http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/virt-driver-numa-placement.html

http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/virt-driver-vcpu-topology.html

http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html

http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/input-output-based-numa-scheduling.html

http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html

Chris



[openstack-dev] [Heat] Talk on Jinja Metatemplates for upcoming summit

2015-02-02 Thread Pratik Mallya
Hello Heat Developers,

As part of an internal development project at Rackspace, I implemented a 
mechanism to allow using Jinja templating system in heat templates. I was 
hoping to give a talk on the same for the upcoming summit (which will be the 
first summit after I started working on openstack). Have any of you worked/ are 
working on something similar? If so, could you please contact me and we can 
maybe propose a joint talk? :-)

Please let me know! It’s been interesting work and I hope the community will be 
excited to see it.
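For readers unfamiliar with the idea, a minimal sketch of what pre-processing a template body through Jinja could look like; the template text, loop variable, and resource names here are invented for illustration and are not the actual Rackspace implementation:

```python
from jinja2 import Template

# A metatemplate fragment that expands into repeated resource entries
# before the result is handed to Heat.
source = "{% for i in range(count) %}server_{{ i }};{% endfor %}"

rendered = Template(source).render(count=3)
print(rendered)  # server_0;server_1;server_2;
```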

Thanks!
-Pratik 



Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Morgan Fainberg
I think the simple answer is "yes". We (keystone) should emit notifications. 
And yes other projects should listen. 

The only thing really in discussion should be:

1: soft delete or hard delete? Does the service mark it as orphaned, or just 
delete (leave this to nova, cinder, etc to discuss)

2: how to cleanup when an event is missed (e.g rabbit bus goes out to lunch). 
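As a rough sketch of the listener side, the endpoint below collects project IDs from keystone's identity.project.deleted events. The class name is invented, and the 'resource_info' payload key reflects keystone's basic notification format but should be treated as an assumption; in a real service this endpoint would be registered with an oslo.messaging notification listener rather than called directly as shown here.

```python
class ProjectDeletedEndpoint(object):
    """Collect project IDs from keystone's identity.project.deleted events.

    A consuming service (nova, cinder, ...) would then mark or reap
    resources owned by those projects.
    """

    def __init__(self):
        self.pending_cleanup = []

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # Only react to project deletions; ignore other identity events.
        if event_type == 'identity.project.deleted':
            self.pending_cleanup.append(payload['resource_info'])

# Simulate receiving one deletion event:
ep = ProjectDeletedEndpoint()
ep.info({}, 'identity.keystone-host', 'identity.project.deleted',
        {'resource_info': 'a1b2c3d4'}, {})
print(ep.pending_cleanup)  # ['a1b2c3d4']
```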

--Morgan 

Sent via mobile

> On Feb 2, 2015, at 10:16, Matthew Treinish  wrote:
> 
>> On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
>> This came up in the operators mailing list back in June [1] but given the
>> subject probably didn't get much attention.
>> 
>> Basically there is a really old bug [2] from Grizzly that is still a problem
>> and affects multiple projects.  A tenant can be deleted in Keystone even
>> though other resources in other projects are under that project, and those
>> resources aren't cleaned up.
> 
> I agree this probably can be a major pain point for users. We've had to work 
> around it
> in tempest by creating things like:
> 
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
> and
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py
> 
> to ensure we aren't dangling resources after a run. But, this doesn't work in
> all cases either. (like with tenant isolation enabled)
> 
> I also know there is a stackforge project that is attempting something similar
> here:
> 
> http://git.openstack.org/cgit/stackforge/ospurge/
> 
> It would be much nicer if the burden for doing this was taken off users and 
> this
> was just handled cleanly under the covers.
> 
>> 
>> Keystone implemented event notifications back in Havana [3] but the other
>> projects aren't listening on them to know when a project has been deleted
>> and act accordingly.
>> 
>> The bug has several people saying "we should talk about this at the summit"
>> for several summits, but I can't find any discussion or summit sessions
>> related back to the bug.
>> 
>> Given this is an operations and cross-project issue, I'd like to bring it up
>> again for the Vancouver summit if there is still interest (which I'm
>> assuming there is from operators).
> 
> I'd definitely support having a cross-project session on this.
> 
>> 
>> There is a blueprint specifically for the tenant deletion case but it's
>> targeted at only Horizon [4].
>> 
>> Is anyone still working on this? Is there sufficient interest in a
>> cross-project session at the L summit?
>> 
>> Thinking out loud, even if nova doesn't listen to events from keystone, we
>> could at least have a periodic task that looks for instances where the
>> tenant no longer exists in keystone and then take some action (log a
>> warning, shutdown/archive/, reap, etc).
>> 
>> There is also a spec for L to transfer instance ownership [5] which could
>> maybe come into play, but I wouldn't depend on it.
>> 
>> [1] 
>> http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html
>> [2] https://bugs.launchpad.net/nova/+bug/967832
>> [3] https://blueprints.launchpad.net/keystone/+spec/notifications
>> [4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
>> [5] https://review.openstack.org/#/c/105367/
> 
> -Matt Treinish
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



[openstack-dev] UpgradeImpact: Replacing swift_enable_net with swift_store_endpoint

2015-02-02 Thread Jesse Cook
Configuration options will change (https://review.openstack.org/#/c/146972/4):

- Removed config option: "swift_enable_snet". The default value of
  "swift_enable_snet" was False [1]. The comments indicated not to change this
  default value unless you are Rackspace [2].

- Added config option "swift_store_endpoint". The default value of
  "swift_store_endpoint" is None, in which case the storage url from the auth
  response will be used. If set, the configured endpoint will be used. Example
  values: "swift_store_endpoint" = "https://www.example.com/v1/not_a_container";

1. 
https://github.com/openstack/glance/blob/fd5a55c7f386a9d9441d5f1291ff6a92f7e6cc1b/etc/glance-api.conf#L525
2. 
https://github.com/openstack/glance/blob/fd5a55c7f386a9d9441d5f1291ff6a92f7e6cc1b/etc/glance-api.conf#L520
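To illustrate the migration, the relevant glance-api.conf fragment would change roughly as follows; the endpoint value is the example from above, not a recommendation:

```ini
# Before (option removed by this change):
# swift_enable_snet = False

# After: leave unset to use the storage URL from the auth response,
# or pin the endpoint explicitly:
swift_store_endpoint = https://www.example.com/v1/not_a_container
```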

If you are using "swift_enable_snet" (i.e. You changed the default config from 
False to True in your deployment) and you are not Rackspace, please respond to 
this thread. Note, this is very unlikely as it is a Rackspace only option and 
documented as such.

Thanks,

Jesse


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Andrew Laski


On 02/02/2015 11:26 AM, Daniel P. Berrange wrote:

On Mon, Feb 02, 2015 at 11:19:45AM -0500, Andrew Laski wrote:

On 02/02/2015 05:58 AM, Daniel P. Berrange wrote:

On Sun, Feb 01, 2015 at 11:20:08AM -0800, Noel Burton-Krahn wrote:

Thanks for bringing this up, Daniel.  I don't think it makes sense to have
a timeout on live migration, but operators should be able to cancel it,
just like any other unbounded long-running process.  For example, there's
no timeout on file transfers, but they need an interface report progress
and to cancel them.  That would imply an option to cancel evacuation too.

There has been periodic talk about a generic "tasks API" in Nova for managing
long running operations and getting information about their progress, but I
am not sure what the status of that is. It would obviously be applicable to
migration if that's a route we took.

Currently the status of a tasks API is that it would happen after the API
v2.1 microversions work has created a suitable framework in which to add
tasks to the API.

So is all work on tasks blocked by the microversions support? I would have
thought that would only block places where we need to modify existing APIs.
Are we not able to add APIs for listing / cancelling tasks as new APIs
without such a dependency on microversions?


Tasks work is certainly not blocked on waiting for microversions. There 
is a large amount of non API facing work that could be done to move 
forward the idea of a task driving state changes within Nova. I would 
very likely be working on that if I wasn't currently spending much of my 
time on cells v2.




Regards,
Daniel





[openstack-dev] [Neutron] unable to reproduce bug 1317363

2015-02-02 Thread bharath thiruveedula
Hi,
I am Bharath Thiruveedula. I am new to openstack neutron and networking. I am 
trying to solve bug 1317363, but I am unable to reproduce it. The steps I 
took to reproduce:
1) Created a network with external = True
2) Created a subnet for the above network with CIDR = 172.24.4.0/24 and gateway-ip = 172.24.4.5
3) Created the router
4) Set the gateway interface on the router
5) Tried to change the subnet gateway-ip, but got this error:
   "Gateway ip 172.24.4.7 conflicts with allocation pool 172.24.4.6-172.24.4.254"

I used this command for that: "neutron subnet-update 
ff9fe828-9ca2-42c4-9997-3743d8fc0b0c --gateway-ip 172.24.4.7"
Can you please help me with this issue?
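The error itself comes down to a straightforward overlap check between the requested gateway and the subnet's allocation pool. A minimal sketch of that logic (not Neutron's actual implementation) with the values from the error message:

```python
import ipaddress

def gateway_conflicts(gateway, pool_start, pool_end):
    """Return True if the gateway IP falls inside the allocation pool."""
    gw = ipaddress.ip_address(gateway)
    return (ipaddress.ip_address(pool_start) <= gw
            <= ipaddress.ip_address(pool_end))

# The requested gateway sits inside the pool, hence the error:
print(gateway_conflicts('172.24.4.7', '172.24.4.6', '172.24.4.254'))  # True
# The original gateway sits below the pool, so it does not conflict:
print(gateway_conflicts('172.24.4.5', '172.24.4.6', '172.24.4.254'))  # False
```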

-- Bharath Thiruveedula


Re: [openstack-dev] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Matthew Treinish
On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
> This came up in the operators mailing list back in June [1] but given the
> subject probably didn't get much attention.
> 
> Basically there is a really old bug [2] from Grizzly that is still a problem
> and affects multiple projects.  A tenant can be deleted in Keystone even
> though other resources in other projects are under that project, and those
> resources aren't cleaned up.

I agree this probably can be a major pain point for users. We've had to work 
around it
in tempest by creating things like:

http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
and
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py

to ensure we aren't dangling resources after a run. But, this doesn't work in
all cases either. (like with tenant isolation enabled)

I also know there is a stackforge project that is attempting something similar
here:

http://git.openstack.org/cgit/stackforge/ospurge/

It would be much nicer if the burden for doing this was taken off users and this
was just handled cleanly under the covers.

> 
> Keystone implemented event notifications back in Havana [3] but the other
> projects aren't listening on them to know when a project has been deleted
> and act accordingly.
> 
> The bug has several people saying "we should talk about this at the summit"
> for several summits, but I can't find any discussion or summit sessions
> related back to the bug.
> 
> Given this is an operations and cross-project issue, I'd like to bring it up
> again for the Vancouver summit if there is still interest (which I'm
> assuming there is from operators).

I'd definitely support having a cross-project session on this.

> 
> There is a blueprint specifically for the tenant deletion case but it's
> targeted at only Horizon [4].
> 
> Is anyone still working on this? Is there sufficient interest in a
> cross-project session at the L summit?
> 
> Thinking out loud, even if nova doesn't listen to events from keystone, we
> could at least have a periodic task that looks for instances where the
> tenant no longer exists in keystone and then take some action (log a
> warning, shutdown/archive/, reap, etc).
> 
> There is also a spec for L to transfer instance ownership [5] which could
> maybe come into play, but I wouldn't depend on it.
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html
> [2] https://bugs.launchpad.net/nova/+bug/967832
> [3] https://blueprints.launchpad.net/keystone/+spec/notifications
> [4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
> [5] https://review.openstack.org/#/c/105367/
> 

-Matt Treinish




Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Ian Wells
On 2 February 2015 at 09:49, Chris Friesen 
wrote:

> On 02/02/2015 10:51 AM, Jay Pipes wrote:
>
>> This is a bug that I discovered when fixing some of the NUMA related nova
>> objects. I have a patch that should fix it up shortly.
>>
>
> Any chance you could point me at it or send it to me?
>
>  This is what happens when we don't have any functional testing of stuff
>> that is
>> merged into master...
>>
>
> Indeed.  Does tempest support hugepages/NUMA/pinning?
>

This is a running discussion, but largely no - because this is tied to the
capabilities of the host, there's no guarantee for a given scenario what
result you would get (because Tempest will run on any hardware).

If you have test cases that should pass or fail on a NUMA-capable node, can
you write them up?  We're working on NUMA-specific testing right now
(though I'm not sure who, specifically, is working on the test case side of
that).


[openstack-dev] [sahara] Spark CDH followup and questions related to DIB

2015-02-02 Thread Trevor McKay
Hello all,

  I tried a Spark image with the cdh5 element Daniele describes below,
but it did not fix the jackson version issue. The spark assembly still
depends on inconsistent versions.

  Looking into the spark git a little bit more, I discovered that in the
cdh5-1.2.0_5.3.0 branch the jackson version is settable. I built spark
on this branch with jackson 1.9.13 and was able to run Spark EDP without
any classpath manipulations. But, it doesn't appear to be released yet.

  A couple questions come out of this:

1) When do we move to cdh5.3 for spark images? Do we try to do this in
Kilo?

The work is already started, as noted below.  Daniele has done initial
work using cdh5 for the spark plugin and the Intel folks are working on 
cdh5 and cdh5.3 for the CDH plugin.

2) Do we carry a Spark assembly for Sahara ourselves, or wait for a
release tarball from CDH that uses this branch and sets a consistent
jackson version?  

I asked about any plans to release a tarball from this
branch on the apache spark users list, waiting for a response.

One alternative is for us to host our own spark build that we can use in
sahara-image-elements. The other idea is for us to wait for a release
tarball at http://archive.apache.org/dist/spark/ and continue to use the
classpath workaround in spark EDP for the time being.
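For reference, the classpath workaround mentioned here amounts to a spark-defaults.conf entry along these lines; the jar path comes from the review discussion quoted later in this thread and may differ per image:

```properties
# Put a consistent jackson-core-asl ahead of the versions bundled in the
# spark assembly so the hadoop-swift jackson-mapper calls resolve correctly.
spark.driver.extraClassPath /usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar
```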

3) Do we fix up sahara-image-elements to support multiple spark
versions? 

Historically sahara-image-elements only supports a single version for
spark images.  This is different from the other plugins.  Since we have
agreed to carry support for a release cycle of older versions after
introducing a new one, should we support both cdh4 and cdh5.x? This will
require changes in diskimage_create.sh.

4) Like #3, do we fix up the spark plugin in Sahara to handle multiple
versions? This is similar to the work the Intel folks are doing now to
separate cdh5 and cdh5.3 code in the cdh plugin.

I am wondering if the above 4 issues result in too much work to add to
kilo-3. Do we make an incremental improvement over Juno, having
spark-swift integration in EDP on cdh4 but without other changes and
address the above issues in L, or do we push on and try to resolve it
all for Kilo?

Best regards,

Trevor

On Wed, 2015-01-28 at 11:57 -0500, Trevor McKay wrote:
> Daniele,
> 
>   Excellent! I'll have to keep a closer eye on bigfoot activity :) I'll
> pursue this.
> 
> Best,
> 
> Trevor
> 
> On Wed, 2015-01-28 at 17:40 +0100, Daniele Venzano wrote:
> > Hello everyone,
> > 
> > there is already some code in our repository:
> > https://github.com/bigfootproject/savanna-image-elements
> > 
> > I did the necessary changes to have the Spark element use the cdh5
> > element. I updated also to Spark 1.2. The old cloudera HDFS-only
> > element is still needed for generating cdh4 images (but probably cdh4
> > support can be thrown away).
> > 
> > Unfortunately I do not have the time to do the necessary
> > testing/validation and submit for review. I also changed the CDH
> > element so that it can install only HDFS, if so required.
> > The changes I made are simple and all contained in the last commit on
> > the master branch of that repo.
> > 
> > The image generated with this code runs in Sahara without any further
> > changes. Feel free to take the code, clean it up and submit for review.
> > 
> > Dan
> > 
> > On Wed, Jan 28, 2015 at 10:43:30AM -0500, Trevor McKay wrote:
> > > Intel folks,
> > > 
> > > Belated welcome to Sahara!  Thank you for your recent commits.
> > > 
> > > Moving this thread to openstack-dev so others may contribute, cc'ing
> > > Daniele and Pietro who pioneered the Spark plugin.
> > > 
> > > I'll respond with another email about Oozie work, but I want to
> > > address the Spark/Swift issue in CDH since I have been working
> > > on it and there is a task which still needs to be done -- that
> > > is to upgrade the CDH version in the spark image and see if
> > > the situation improves (see below)
> > > 
> > > Relevant reviews are here:
> > > 
> > > https://review.openstack.org/146659
> > > https://review.openstack.org/147955
> > > https://review.openstack.org/147985
> > > https://review.openstack.org/146659
> > > 
> > > In the first review, you can see that we set an extra driver
> > > classpath to pull in '/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar.
> > > 
> > > This is because the spark-assembly JAR in CDH4 contains classes from
> > > jackson-mapper-asl-1.8.8 and jackson-core-asl-1.9.x. When the
> > > hadoop-swift.jar dereferences a Swift path, it calls into code
> > > from jackson-mapper-asl-1.8.8 which uses JsonClass.  But JsonClass
> > > was removed in jackson-core-asl-1.9.x, so there is an exception.
> > > 
> > > Therefore, we need to use the classpath to either upgrade the version of
> > > jackson-mapper-asl to 1.9.x or downgrade the version of jackson-core-asl
> > > to 1.8.8 (both work in my testing).  However, the first of these options
> > > requires us to bundle an extra jar.  Since /usr/lib/hadoop

Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Matthew Treinish
On Mon, Feb 02, 2015 at 11:49:26AM -0600, Chris Friesen wrote:
> On 02/02/2015 10:51 AM, Jay Pipes wrote:
> >This is a bug that I discovered when fixing some of the NUMA related nova
> >objects. I have a patch that should fix it up shortly.
> 
> Any chance you could point me at it or send it to me?
> 
> >This is what happens when we don't have any functional testing of stuff that 
> >is
> >merged into master...
> 
> Indeed.  Does tempest support hugepages/NUMA/pinning?

The short answer is not explicitly. The longer answer is that there are 2
patches[1][2] up for review right now that add basic checks to tempest. But,
they haven't been able to merge because the nova support hasn't worked and the
tests fail...

Aside from those 2 basic checks I don't expect any other direct numa, hugepage,
etc. tests to be in tempest. Testing anything besides these basic cases would
require knowledge of the underlying hardware for the deployment, which is out of
scope for tempest. There really needs to be lower level functional testing of
these features.

That being said, the other thing you could do is configure tempest
to use flavors which are created to use NUMA. That would at least implicitly
test that the functionality would work. But, that really isn't a replacement for
the functional testing which is sorely needed here.

[1] https://review.openstack.org/143540
[2] https://review.openstack.org/#/c/143541/
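As a sketch of what such a flavor would carry, the extra spec keys below come from the nova specs referenced earlier in the thread; the values are examples only, not a tested configuration:

```python
# Extra specs for a NUMA-aware test flavor; key names come from the
# virt-driver-numa-placement, virt-driver-large-pages, and
# virt-driver-cpu-pinning specs, values are illustrative.
numa_flavor_extra_specs = {
    "hw:numa_nodes": "2",         # spread guest RAM/vCPUs over 2 NUMA nodes
    "hw:mem_page_size": "large",  # back guest RAM with huge pages
    "hw:cpu_policy": "dedicated", # pin guest vCPUs to host pCPUs
}

# A deployment would create such a flavor (e.g. `nova flavor-key <id> set
# hw:numa_nodes=2`) and point tempest's flavor config options at it.
print(sorted(numa_flavor_extra_specs))
```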


-Matt Treinish




Re: [openstack-dev] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Matt Riedemann



On 2/2/2015 11:46 AM, Matt Riedemann wrote:

This came up in the operators mailing list back in June [1] but given
the subject probably didn't get much attention.

Basically there is a really old bug [2] from Grizzly that is still a
problem and affects multiple projects.  A tenant can be deleted in
Keystone even though other resources in other projects are under that
project, and those resources aren't cleaned up.

Keystone implemented event notifications back in Havana [3] but the
other projects aren't listening on them to know when a project has been
deleted and act accordingly.

The bug has several people saying "we should talk about this at the
summit" for several summits, but I can't find any discussion or summit
sessions related back to the bug.

Given this is an operations and cross-project issue, I'd like to bring
it up again for the Vancouver summit if there is still interest (which
I'm assuming there is from operators).

There is a blueprint specifically for the tenant deletion case but it's
targeted at only Horizon [4].

Is anyone still working on this? Is there sufficient interest in a
cross-project session at the L summit?

Thinking out loud, even if nova doesn't listen to events from keystone,
we could at least have a periodic task that looks for instances where
the tenant no longer exists in keystone and then take some action (log a
warning, shutdown/archive/, reap, etc).

There is also a spec for L to transfer instance ownership [5] which
could maybe come into play, but I wouldn't depend on it.

[1]
http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html

[2] https://bugs.launchpad.net/nova/+bug/967832
[3] https://blueprints.launchpad.net/keystone/+spec/notifications
[4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
[5] https://review.openstack.org/#/c/105367/



I will apologize ahead of time for saying 'projects' for services like 
nova, glance, cinder, etc., while also talking about projects/tenants in 
keystone; I realize this is confusing. :)


--

Thanks,

Matt Riedemann




Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Chris Friesen

On 02/02/2015 10:51 AM, Jay Pipes wrote:

This is a bug that I discovered when fixing some of the NUMA related nova
objects. I have a patch that should fix it up shortly.


Any chance you could point me at it or send it to me?


This is what happens when we don't have any functional testing of stuff that is
merged into master...


Indeed.  Does tempest support hugepages/NUMA/pinning?

Chris



[openstack-dev] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Matt Riedemann
This came up in the operators mailing list back in June [1] but given 
the subject probably didn't get much attention.


Basically there is a really old bug [2] from Grizzly that is still a 
problem and affects multiple projects.  A tenant can be deleted in 
Keystone even though other resources in other projects are under that 
project, and those resources aren't cleaned up.


Keystone implemented event notifications back in Havana [3] but the 
other projects aren't listening on them to know when a project has been 
deleted and act accordingly.


The bug has several people saying "we should talk about this at the 
summit" for several summits, but I can't find any discussion or summit 
sessions related back to the bug.


Given this is an operations and cross-project issue, I'd like to bring 
it up again for the Vancouver summit if there is still interest (which 
I'm assuming there is from operators).


There is a blueprint specifically for the tenant deletion case but it's 
targeted at only Horizon [4].


Is anyone still working on this? Is there sufficient interest in a 
cross-project session at the L summit?


Thinking out loud, even if nova doesn't listen to events from keystone, 
we could at least have a periodic task that looks for instances where 
the tenant no longer exists in keystone and then take some action (log a 
warning, shutdown/archive/, reap, etc).
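A minimal sketch of that periodic check (pure illustration; real code would page through keystoneclient/novaclient results rather than take in-memory lists):

```python
def find_orphaned_instances(instances, live_project_ids):
    """Return IDs of instances whose owning project no longer exists.

    `instances` is an iterable of (instance_id, project_id) pairs and
    `live_project_ids` is the set of project IDs keystone still knows about.
    """
    return [inst_id for inst_id, project_id in instances
            if project_id not in live_project_ids]

# Example: project 'p2' was deleted from keystone.
instances = [('i-1', 'p1'), ('i-2', 'p2'), ('i-3', 'p1')]
print(find_orphaned_instances(instances, {'p1'}))  # ['i-2']
```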


There is also a spec for L to transfer instance ownership [5] which 
could maybe come into play, but I wouldn't depend on it.


[1] 
http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html

[2] https://bugs.launchpad.net/nova/+bug/967832
[3] https://blueprints.launchpad.net/keystone/+spec/notifications
[4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
[5] https://review.openstack.org/#/c/105367/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Matthew Treinish
On Mon, Feb 02, 2015 at 08:07:27PM +0300, Alexandre Levine wrote:
> 
> On 2/2/15 7:39 PM, Sean Dague wrote:
> >On 02/02/2015 11:35 AM, Alexandre Levine wrote:
> >>Thank you Sean.
> >>
> >>We'll be tons of EC2 Tempest tests for your attention shortly.
> >>How would you prefer them? In several reviews, I believe. Not in one,
> >>right?
> >>
> >>Best regards,
> >>   Alex Levine
> >So, honestly, I think that we should probably look at getting the ec2
> >tests out of the Tempest tree as well and into a more dedicated place.
> >Like as part of the stackforge project tree. Given that the right
> >expertise would be there as well. It could use tempest-lib for some of
> >the common parts.
> >
> > -Sean
We tried to find out about tempest-lib and asked Ken'ichi Ohmichi, but it seems
that's still a work in progress. Can you point us somewhere where we can
learn how to employ this technology?

Tempest-lib is the effort to break out useful pieces from the tempest repo so
that they have stable interfaces and can easily be consumed externally.
Right now it only has some basic functionality in it, but we are working on
expanding it more constantly. If there is a needed feature from inside the
tempest repo which is currently missing from the lib we can work together on
migrating it over faster. 

> So the use cases will be:
> 
> 1. Be able to run the suite against EC2 in nova.
> 2. Be able to run the suite against stackforge/EC2.
> 3. Use that for gating for both repos.

These 3 things are really independent of tempest-lib. There more about how you
configure the test suite to be run. (in general and in the CI) Tempest-lib is
just a library which has the common functionality from tempest that is
generally useful outside of the tempest repo and won't help with how you
configure things to run.

But, if your tests are only interacting with things only through the API 1 and
2 should be as simple as pointing it at different endpoints.
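To make the "same suite, different endpoint" idea concrete, here is a small sketch of deriving boto-style connection parameters from a single configured URL. The nova endpoint shown follows the conventional :8773/services/Cloud layout; the standalone port is purely an assumed example:

```python
from urllib.parse import urlparse


def ec2_client_params(endpoint_url):
    """Split an EC2 endpoint URL into the host/port/path/scheme pieces a
    boto-style client wants, so switching backends is one config change."""
    parsed = urlparse(endpoint_url)
    return {
        'host': parsed.hostname,
        'port': parsed.port or (443 if parsed.scheme == 'https' else 80),
        'path': parsed.path or '/',
        'is_secure': parsed.scheme == 'https',
    }


# Same tests, two targets -- only the configured URL differs:
nova_ec2 = ec2_client_params('http://controller:8773/services/Cloud')
standalone_ec2 = ec2_client_params('http://controller:8788/')  # assumed port
print(nova_ec2['path'])  # prints: /services/Cloud
```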

> 
> Additional complication here is that some of the tests will have to be skipped
> because of functionality absence or because of bugs in nova's EC2, but they should
> be employed against stackforge's version.
> 
> Could you advise how to achieve such effects?

This also is just a matter of how you setup and configure your test jobs and the
test suite. It would be the same pretty much wherever the tests end up. When you
get a test suite setup I can help with setting things up to make this simpler.
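One common way to express "skip on this backend" is a capability flag read from configuration. The sketch below uses plain unittest and an invented `BACKEND_CAPABILITIES` set; tempest itself does this with its own config file and decorators:

```python
import unittest

# Invented stand-in for a config option listing what the target EC2
# implementation supports; e.g. sparse for nova's EC2, richer for the
# stackforge one.
BACKEND_CAPABILITIES = {'volume_tags'}


def requires_capability(name):
    """Skip the decorated test unless the backend implements `name`."""
    return unittest.skipUnless(
        name in BACKEND_CAPABILITIES,
        'backend does not implement %s' % name)


class VolumeTagTest(unittest.TestCase):
    @requires_capability('volume_tags')
    def test_create_volume_with_tags(self):
        # A real test would call the EC2 API here.
        self.assertTrue(True)


if __name__ == '__main__':
    runner = unittest.TextTestRunner()
    runner.run(unittest.TestLoader().loadTestsFromTestCase(VolumeTagTest))
```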

If you join the #openstack-qa channel on freenode we can work through exactly
what you're trying to accomplish with higher throughput.

-Matt Treinish

> 
> >
> >>On 2/2/15 6:55 PM, Sean Dague wrote:
> >>>On 02/02/2015 07:01 AM, Alexandre Levine wrote:
> Michael,
> 
> I'm rather new here, especially in regard to communication matters, so
> I'd also be glad to understand how it's done and then I can drive it if
> it's ok with everybody.
> By saying EC2 sub team - who did you keep in mind? From my team 3
> persons are involved.
> 
>   From the technical point of view the transition plan could look
> somewhat
> like this (sequence can be different):
> 
> 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
> 2. Contribute Tempest tests for EC2 functionality and employ them
> against nova's EC2.
> 3. Write spec for required API to be exposed from nova so that we get
> full info.
> 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
> 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
> 6. Communicate and discover all of the existing questions and
> problematic points for the switching from existing EC2 API to the new
> one. Provide solutions or decisions about them.
> 7. Do performance testing of the new stackforge/ec2 and provide fixes if
> any bottlenecks come up.
> 8. Have all of the above prepared for the Vancouver summit and discuss
> the situation there.
> 
> Michael, I am still wondering, who's going to be responsible for timely
> reviews and approvals of the fixes and tests we're going to contribute
> to nova? So far this is the biggest risk. Is there anyway to allow some
> of us to participate in the process?
> >>>I am happy to volunteer to shepherd these reviews. I'll try to keep an
> >>>eye on them, and if something is blocking please just ping me directly
> >>>on IRC in #openstack-nova or bring them forward to the weekly Nova
> >>>meeting.
> >>>
> >>> -Sean
> >>>
> >>
> >>__
> >>OpenStack Development Mailing List (not for usage questions)
> >>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-

Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Chris Friesen

On 02/02/2015 11:00 AM, Sahid Orentino Ferdjaoui wrote:

On Mon, Feb 02, 2015 at 10:44:09AM -0600, Chris Friesen wrote:

Hi,

I'm trying to make use of huge pages as described in
"http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html";.
I'm running kilo as of Jan 27th.
I've allocated 10000 2MB pages on a compute node.  "virsh capabilities" on that 
node contains:

<topology>
  <cells num='2'>
    <cell id='0'>
      <memory unit='KiB'>67028244</memory>
      <pages unit='KiB' size='4'>16032069</pages>
      <pages unit='KiB' size='2048'>5000</pages>
      <pages unit='KiB' size='1048576'>1</pages>
...
    <cell id='1'>
      <memory unit='KiB'>67108864</memory>
      <pages unit='KiB' size='4'>16052224</pages>
      <pages unit='KiB' size='2048'>5000</pages>
      <pages unit='KiB' size='1048576'>1</pages>


I then restarted nova-compute, I set "hw:mem_page_size=large" on a
flavor, and then tried to boot up an instance with that flavor.  I
got the error logs below in nova-scheduler.  Is this a bug?


Hello,

Launchpad.net would be a more appropriate place to
discuss something which looks like a bug.

   https://bugs.launchpad.net/nova/+filebug


Just wanted to make sure I wasn't missing something.  Bug has been opened at 
https://bugs.launchpad.net/nova/+bug/1417201


I added some additional logs to the bug report of what the numa topology looks 
like on the compute node and in NUMATopologyFilter.host_passes().



According to your trace I would say you are running different versions
of Nova services.


nova should all be the same version.  I'm running juno versions of other 
openstack components though.



BTW please verify your version of libvirt. Huge pages are supported
starting with 1.2.8 (but this should definitely not fail so badly like
that)


Libvirt is 1.2.8.

Chris
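For anyone reproducing this, a rough sketch of the setup under test. The flavor/image names and the nova CLI invocation are illustrative; the per-node page counts match the capabilities output above:

```shell
# Reserve 5000 x 2MB huge pages on each NUMA node of the compute host:
echo 5000 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 5000 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

# Ask nova to back the flavor's memory with large pages:
nova flavor-key m1.hugepages set hw:mem_page_size=large

# Boot with that flavor; NUMATopologyFilter must then find a host NUMA
# cell with enough free large pages:
nova boot --flavor m1.hugepages --image cirros-0.3.2 test-vm
```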

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Swift 2.2.2 released today

2015-02-02 Thread John Dickinson
Everyone,

I'm happy to announce that today we have released Swift 2.2.2. (Yes, that's
2.2.2 on 2/2.) This release has a few very important features that came
directly from production clusters. I recommend that you upgrade so you can
take advantage of the new goodness.

As always, you can upgrade to this version of Swift with zero end-user
downtime.

So what's so great in this release? Below are some highlights, but please
read the full changelog at
https://github.com/openstack/swift/blob/master/CHANGELOG

* Data placement changes

  This release has several major changes to data placement in Swift in
  order to better handle different deployment patterns. First, with an
  unbalance-able ring, fewer partitions will move if the movement doesn't
  result in any better dispersion across failure domains. Also, empty
  (partition weight of zero) devices will no longer keep partitions after
  rebalancing when there is an unbalance-able ring.

  Second, the notion of "overload" has been added to Swift's rings. This
  allows devices to take some extra partitions (more than would normally
  be allowed by the device weight) so that smaller and unbalanced clusters
  will have less data movement between servers, zones, or regions if there
  is a failure in the cluster.

  Finally, rings have a new metric called "dispersion". This is the
  percentage of partitions in the ring that have too many replicas in a
  particular failure domain. For example, if you have three servers in a
  cluster but two replicas for a partition get placed onto the same
  server, that partition will count towards the dispersion metric. A
  lower value is better, and the value can be used to find the proper
  value for "overload".

  The overload and dispersion metrics have been exposed in the
  swift-ring-builder CLI tool.

  See http://swift.openstack.org/overview_ring.html
  for more info on how data placement works now.
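As a rough illustration of the dispersion idea — this is a simplified stand-in, not swift-ring-builder's actual algorithm; `assignments` maps each partition to the failure domain each replica landed in:

```python
from collections import Counter


def dispersion(assignments, max_replicas_per_domain=1):
    """Percentage of partitions with more than `max_replicas_per_domain`
    replicas in a single failure domain (lower is better)."""
    overloaded = sum(
        1 for domains in assignments.values()
        if max(Counter(domains).values()) > max_replicas_per_domain)
    return 100.0 * overloaded / len(assignments)


# Three replicas across three servers: partition 1 has two replicas on
# server 'a', so half of the partitions count against the metric.
placements = {0: ['a', 'b', 'c'], 1: ['a', 'a', 'b']}
print(dispersion(placements))  # prints: 50.0
```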

* Improve container replication for large, out-of-date containers

* Added console logging to swift-drive-audit

* Changed ratelimiting to support whitelisting and blacklisting based on
  account metadata (sysmeta). Note that the existing config options continue
  to work.

This release is the combined work of 20 developers, including 3 first-time
Swift contributors:

* Harshit Chitalia
* Dhriti Shikhar
* Nicolas Trangez


Thank you to everyone who contributed: developers, support staff, and
operators alike--all of whom helped find and diagnose the problems solved in
this release.

--John











Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Matthew Treinish
On Mon, Feb 02, 2015 at 07:35:46PM +0300, Alexandre Levine wrote:
> Thank you Sean.
> 
> There'll be tons of EC2 Tempest tests for your attention shortly.
> How would you prefer them? In several reviews, I believe. Not in one, right?

Let's take a step back for a sec. How many tests and what kind are we talking
about here?

I'm thinking it might be better to not just try and dump all this stuff in
tempest. While in the past we've just dumped all of this in tempest, moving
forward I don't think that's what we want to be doing. The current ec2 tests
have always felt out of place to me in tempest and historically haven't been
maintained as well as the other tests. If we're talking about ramping up the ec2
testing we probably should look at migrating everything elsewhere, especially
given that it just essentially nova testing. I see 2 better options here: we
either put the tests in the tree for the project with the ec2 implementation, or
we create a new repo like tempest-ec2 for testing this. In either case we'll
leverage tempest-lib to make sure the bits your existing testing is relying on
are consumable outside of the tempest repo.

-Matt Treinish


> On 2/2/15 6:55 PM, Sean Dague wrote:
> >On 02/02/2015 07:01 AM, Alexandre Levine wrote:
> >>Michael,
> >>
> >>I'm rather new here, especially in regard to communication matters, so
> >>I'd also be glad to understand how it's done and then I can drive it if
> >>it's ok with everybody.
> >>By saying EC2 sub team - who did you keep in mind? From my team 3
> >>persons are involved.
> >>
> >> From the technical point of view the transition plan could look somewhat
> >>like this (sequence can be different):
> >>
> >>1. Triage EC2 bugs and fix showstoppers in nova's EC2.
> >>2. Contribute Tempest tests for EC2 functionality and employ them
> >>against nova's EC2.
> >>3. Write spec for required API to be exposed from nova so that we get
> >>full info.
> >>4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
> >>5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
> >>6. Communicate and discover all of the existing questions and
> >>problematic points for the switching from existing EC2 API to the new
> >>one. Provide solutions or decisions about them.
> >>7. Do performance testing of the new stackforge/ec2 and provide fixes if
> >>any bottlenecks come up.
> >>8. Have all of the above prepared for the Vancouver summit and discuss
> >>the situation there.
> >>
> >>Michael, I am still wondering, who's going to be responsible for timely
> >>reviews and approvals of the fixes and tests we're going to contribute
> >>to nova? So far this is the biggest risk. Is there anyway to allow some
> >>of us to participate in the process?
> >I am happy to volunteer to shepherd these reviews. I'll try to keep an
> >eye on them, and if something is blocking please just ping me directly
> >on IRC in #openstack-nova or bring them forward to the weekly Nova meeting.
> >
> > -Sean
> >
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine


On 2/2/15 7:04 PM, Sean Dague wrote:

On 02/02/2015 07:01 AM, Alexandre Levine wrote:

Michael,

I'm rather new here, especially in regard to communication matters, so
I'd also be glad to understand how it's done and then I can drive it if
it's ok with everybody.
By saying EC2 sub team - who did you keep in mind? From my team 3
persons are involved.

 From the technical point of view the transition plan could look somewhat
like this (sequence can be different):

1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them
against nova's EC2.
3. Write spec for required API to be exposed from nova so that we get
full info.
4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Communicate and discover all of the existing questions and
problematic points for the switching from existing EC2 API to the new
one. Provide solutions or decisions about them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if
any bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss
the situation there.

Michael, I am still wondering, who's going to be responsible for timely
reviews and approvals of the fixes and tests we're going to contribute
to nova? So far this is the biggest risk. Is there anyway to allow some
of us to participate in the process?

It would also be really helpful if there were reviews from your team on
any ec2-touching code.

https://review.openstack.org/#/q/file:%255Enova/api/ec2.*+status:open,n,z

There currently are only a few patches which touch ec2 that are ec2
function/bug related, and mostly don't have any scored reviews.
Especially this series -
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/ec2-volume-and-snapshot-tags,n,z


Which is a month old with no scoring.


Yes, we'll start looking there as well.


-Sean


Best regards,
   Alex Levine

On 2/2/15 2:46 AM, Michael Still wrote:

So, its exciting to me that we seem to developing more forward
momentum here. I personally think the way forward is a staged
transition from the in-nova EC2 API to the stackforge project, with
testing added to ensure that we are feature complete between the two.
I note that Soren disagrees with me here, but that's ok -- I'd like to
see us work through that as a team based on the merits.

So... It sounds like we have an EC2 sub team forming. How do we get
that group meeting to come up with a transition plan?

Michael

On Sun, Feb 1, 2015 at 4:12 AM, Davanum Srinivas 
wrote:

Alex,

Very cool. thanks.

-- dims

On Sat, Jan 31, 2015 at 1:04 AM, Alexandre Levine
 wrote:

Davanum,

Now that the picture with the both EC2 API solutions has cleared up
a bit, I
can say yes, we'll be adding the tempest tests and doing devstack
integration.

Best regards,
Alex Levine

On 1/31/15 2:21 AM, Davanum Srinivas wrote:

Alexandre, Randy,

Are there plans afoot to add support to switch on stackforge/ec2-api
in devstack? add tempest tests etc? CI Would go a long way in
alleviating concerns i think.

thanks,
dims

On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy 
wrote:

As you know we have been driving forward on the stackforge
project and
it's our intention to continue to support it over time, plus
reinvigorate
the GCE APIs when that makes sense. So we're supportive of
deprecating
from Nova to focus on EC2 API in Nova.  I also think it's good for
these
APIs to be able to iterate outside of the standard release cycle.



--Randy

VP, Technology, EMC Corporation
Formerly Founder & CEO, Cloudscaling (now a part of EMC)
+1 (415) 787-2253 [google voice]
TWITTER: twitter.com/randybias
LINKEDIN: linkedin.com/in/randybias
ASSISTANT: ren...@emc.com






On 1/29/15, 4:01 PM, "Michael Still"  wrote:


Hi,

as you might have read on openstack-dev, the Nova EC2 API
implementation is in a pretty sad state. I wont repeat all of those
details here -- you can read the thread on openstack-dev for detail.

However, we got here because no one is maintaining the code in Nova
for the EC2 API. This is despite repeated calls over the last 18
months (at least).

So, does the Foundation have a role here? The Nova team has
failed to
find someone to help us resolve these issues. Can the board perhaps
find resources as the representatives of some of the largest
contributors to OpenStack? Could the Foundation employ someone to
help
us our here?

I suspect the correct plan is to work on getting the stackforge
replacement finished, and ensuring that it is feature compatible
with
the Nova implementation. However, I don't want to preempt the design
process -- there might be other ways forward here.

I feel that a continued discussion which just repeats the last 18
months wont actually fix the situation -- its time to "break out" of
that mode and find other ways to try and get someone working on this
problem.

Thoughts welcome.

Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine


On 2/2/15 7:39 PM, Sean Dague wrote:

On 02/02/2015 11:35 AM, Alexandre Levine wrote:

Thank you Sean.

There'll be tons of EC2 Tempest tests for your attention shortly.
How would you prefer them? In several reviews, I believe. Not in one,
right?

Best regards,
   Alex Levine

So, honestly, I think that we should probably look at getting the ec2
tests out of the Tempest tree as well and into a more dedicated place.
Like as part of the stackforge project tree. Given that the right
expertise would be there as well. It could use tempest-lib for some of
the common parts.

-Sean
We tried to find out about tempest-lib and asked Ken'ichi Ohmichi, but it 
seems that's still a work in progress. Can you point us somewhere where we 
can understand how to employ this technology?

So the use cases will be:

1. Be able to run the suite against EC2 in nova.
2. Be able to run the suite against stackforge/EC2.
3. Use that for gating for both repos.

Additional complication here is that some of the tests will have to be 
skipped because of functionality absence or because of bugs in nova's 
EC2 but should be employed against stackforge's version.


Could you advise how to achieve such effects?




On 2/2/15 6:55 PM, Sean Dague wrote:

On 02/02/2015 07:01 AM, Alexandre Levine wrote:

Michael,

I'm rather new here, especially in regard to communication matters, so
I'd also be glad to understand how it's done and then I can drive it if
it's ok with everybody.
By saying EC2 sub team - who did you keep in mind? From my team 3
persons are involved.

  From the technical point of view the transition plan could look
somewhat
like this (sequence can be different):

1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them
against nova's EC2.
3. Write spec for required API to be exposed from nova so that we get
full info.
4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Communicate and discover all of the existing questions and
problematic points for the switching from existing EC2 API to the new
one. Provide solutions or decisions about them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if
any bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss
the situation there.

Michael, I am still wondering, who's going to be responsible for timely
reviews and approvals of the fixes and tests we're going to contribute
to nova? So far this is the biggest risk. Is there anyway to allow some
of us to participate in the process?

I am happy to volunteer to shepherd these reviews. I'll try to keep an
eye on them, and if something is blocking please just ping me directly
on IRC in #openstack-nova or bring them forward to the weekly Nova
meeting.

 -Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes/log - 02/02/2015

2015-02-02 Thread Renat Akhmerov
Thanks for joining us today for team meeting!

Meeting minutes: http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-02-02-16.00.html
Full log: http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-02-02-16.00.log.html

The next meeting is scheduled on Feb 09

Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Sahid Orentino Ferdjaoui
On Mon, Feb 02, 2015 at 11:51:47AM -0500, Jay Pipes wrote:
> This is a bug that I discovered when fixing some of the NUMA related nova
> objects. I have a patch that should fix it up shortly.

Never seen this issue; it would be great to have a bug reported.

> This is what happens when we don't have any functional testing of stuff that
> is merged into master...
> Best,
> -jay

Thanks,
s.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Boris Pavlovic
On 02/02/2015 11:35 AM, Alexandre Levine wrote:
> Thank you Sean.
>
> We'll be tons of EC2 Tempest tests for your attention shortly.
> How would you prefer them? In several reviews, I believe. Not in one,
> right?
>
> Best regards,
>   Alex Levine

So, honestly, I think that we should probably look at getting the ec2
> tests out of the Tempest tree as well and into a more dedicated place.
> Like as part of the stackforge project tree. Given that the right
> expertise would be there as well. It could use tempest-lib for some of
> the common parts.



The Rally team would be happy to accept some of the tests, and we also
support in-tree plugins, so the tests that are only for hardcore
functional testing and not reusable in real life can stay in the
ec2-api tree.

Best regards,
Boris Pavlovic


On Mon, Feb 2, 2015 at 7:39 PM, Sean Dague  wrote:

> On 02/02/2015 11:35 AM, Alexandre Levine wrote:
> > Thank you Sean.
> >
> > We'll be tons of EC2 Tempest tests for your attention shortly.
> > How would you prefer them? In several reviews, I believe. Not in one,
> > right?
> >
> > Best regards,
> >   Alex Levine
>
> So, honestly, I think that we should probably look at getting the ec2
> tests out of the Tempest tree as well and into a more dedicated place.
> Like as part of the stackforge project tree. Given that the right
> expertise would be there as well. It could use tempest-lib for some of
> the common parts.
>
> -Sean
>
> >
> > On 2/2/15 6:55 PM, Sean Dague wrote:
> >> On 02/02/2015 07:01 AM, Alexandre Levine wrote:
> >>> Michael,
> >>>
> >>> I'm rather new here, especially in regard to communication matters, so
> >>> I'd also be glad to understand how it's done and then I can drive it if
> >>> it's ok with everybody.
> >>> By saying EC2 sub team - who did you keep in mind? From my team 3
> >>> persons are involved.
> >>>
> >>>  From the technical point of view the transition plan could look
> >>> somewhat
> >>> like this (sequence can be different):
> >>>
> >>> 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
> >>> 2. Contribute Tempest tests for EC2 functionality and employ them
> >>> against nova's EC2.
> >>> 3. Write spec for required API to be exposed from nova so that we get
> >>> full info.
> >>> 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
> >>> 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
> >>> 6. Communicate and discover all of the existing questions and
> >>> problematic points for the switching from existing EC2 API to the new
> >>> one. Provide solutions or decisions about them.
> >>> 7. Do performance testing of the new stackforge/ec2 and provide fixes
> if
> >>> any bottlenecks come up.
> >>> 8. Have all of the above prepared for the Vancouver summit and discuss
> >>> the situation there.
> >>>
> >>> Michael, I am still wondering, who's going to be responsible for timely
> >>> reviews and approvals of the fixes and tests we're going to contribute
> >>> to nova? So far this is the biggest risk. Is there anyway to allow some
> >>> of us to participate in the process?
> >> I am happy to volunteer to shephard these reviews. I'll try to keep an
> >> eye on them, and if something is blocking please just ping me directly
> >> on IRC in #openstack-nova or bring them forward to the weekly Nova
> >> meeting.
> >>
> >> -Sean
> >>
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Sahid Orentino Ferdjaoui
On Mon, Feb 02, 2015 at 10:44:09AM -0600, Chris Friesen wrote:
> Hi,
> 
> I'm trying to make use of huge pages as described in
> "http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html";.
> I'm running kilo as of Jan 27th.
> I've allocated 10000 2MB pages on a compute node.  "virsh capabilities" on 
> that node contains:
> 
> <topology>
>   <cells num='2'>
>     <cell id='0'>
>       <memory unit='KiB'>67028244</memory>
>       <pages unit='KiB' size='4'>16032069</pages>
>       <pages unit='KiB' size='2048'>5000</pages>
>       <pages unit='KiB' size='1048576'>1</pages>
> ...
>     <cell id='1'>
>       <memory unit='KiB'>67108864</memory>
>       <pages unit='KiB' size='4'>16052224</pages>
>       <pages unit='KiB' size='2048'>5000</pages>
>       <pages unit='KiB' size='1048576'>1</pages>
> 
> 
> I then restarted nova-compute, I set "hw:mem_page_size=large" on a
> flavor, and then tried to boot up an instance with that flavor.  I
> got the error logs below in nova-scheduler.  Is this a bug?

Hello,

Launchpad.net would be a more appropriate place to
discuss something which looks like a bug.

  https://bugs.launchpad.net/nova/+filebug

According to your trace I would say you are running different versions
of Nova services.

BTW please verify your version of libvirt. Huge pages are supported
starting with 1.2.8 (but this should definitely not fail so badly like
that)

s.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-02-02 Thread Brian Haley
Kevin,

I think we are finally converging.  One of the points I've been trying to make
is that users are playing with fire when they start playing with some of these
port attributes, and given the tool we have to work with (DHCP), these
changes cannot be applied to a VM seamlessly.  That's life
in the cloud, and most of these things can (and should) be designed around.

On 02/02/2015 06:48 AM, Kevin Benton wrote:
>> The only thing this discussion has convinced me of is that allowing users
> to change the fixed IP address on a neutron port leads to a bad
> user-experience.
> 
> Not as bad as having to delete a port and create another one on the same
> network just to change addresses though...
> 
>> Even with an 8-minute renew time you're talking up to a 7-minute blackout
> (87.5% of lease time before using broadcast).
> 
> I suggested 240 seconds renewal time, which is up to 4 minutes of
> connectivity outage. This doesn't have anything to do with lease time and
> unicast DHCP will work because the spoof rules allow DHCP client traffic
> before restricting to specific IPs.

The unicast DHCP will make it to the "wire", but if you've renumbered the subnet
either a) the DHCP server won't respond because its IP has changed as well; or
b) the DHCP server won't respond because there is no mapping for the VM on its
old subnet.
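The arithmetic behind the figures in this thread ("up to 4 minutes", "7-minute blackout") follows the usual DHCP client timers: unicast renewal starts at T1 = 50% of the lease and broadcast rebinding at T2 = 87.5%, per the RFC 2131 defaults. A quick sketch:

```python
def dhcp_timers(lease_seconds):
    """Return (T1, T2): when a client starts unicast renewal and when it
    falls back to broadcast rebinding, using the RFC 2131 defaults of
    50% and 87.5% of the lease time."""
    return 0.5 * lease_seconds, 0.875 * lease_seconds


# An 8-minute lease: unicast renew attempts begin at 4 minutes, but a
# client that can no longer reach its old server keeps retrying until
# 7 minutes (87.5%) before broadcasting -- the blackout quoted above.
t1, t2 = dhcp_timers(8 * 60)
print(t1 / 60, t2 / 60)  # prints: 4.0 7.0
```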

>> Most would have rebooted long before then, true?  Cattle not pets, right?
> 
> Only in an ideal world that I haven't encountered with customer deployments. 
> Many enterprise deployments end up bringing pets along where reboots aren't 
> always free. The time taken to relaunch programs and restore state can end
> up being 10 minutes+ if it's something like a VDI deployment or dev
> environment where someone spends a lot of time working on one VM.

This would happen if the AZ their VM was in went offline as well, at which point
they would change their design to be more cloud-aware than it was.  Let's not
heap all the blame on neutron - the user is tasked with vetting that their
decisions meet the requirements they desire by thoroughly testing it.

>> Changing the lease time is just papering-over the real bug - neutron
> doesn't support seamless changes in IP addresses on ports, since it totally 
> relies on the dhcp configuration settings a deployer has chosen.
> 
> It doesn't need to be seamless, but it certainly shouldn't be useless. 
> Connectivity interruptions can be expected with IP changes (e.g. I've seen 
> changes in elastic IPs on EC2 interrupt connectivity to an instance for
> up to 2 minutes), but an entire day of downtime is awful.

Yes, I agree, an entire day of downtime is bad.

> One of the things I'm getting at is that a deployer shouldn't be choosing
> such high lease times and we are encouraging it with a high default. You are
> arguing for infrequent renewals to work around excessive logging, which is
> just an implementation problem that should be addressed with a patch to your
> logging collector (de-duplication) or to dnsmasq (don't log renewals).

My #1 deployment problem was around control-plane upgrade, not logging:

"During a control-plane upgrade or outage, having a short DHCP lease time will
take all your VMs offline.  The old value of 2 minutes is not a realistic value
for an upgrade, and I don't think 8 minutes is much better.  Yes, when DHCP is
down you can't boot a new VM, but as long as customers can get to their existing
VMs they're pretty happy and won't scream bloody murder."

>> Documenting a VM reboot is necessary, or even deprecating this (you won't
>> like
> that) are sounding better to me by the minute.
> 
> If this is an approach you really want to go with, then we should at least
> be consistent and deprecate the extra dhcp options extension (or at least
> the ability to update ports' dhcp options). Updating subnet attributes like 
gateway_ip, dns_nameservers, and host_routes should be thrown out as well. All
> of these things depend on the DHCP server to deliver updated information and
> are hindered by renewal times. Why discriminate against IP updates on a port?
> A failure to receive many of those other types of changes could result in
> just as severe of a connection disruption.

How about a big (*) next to all the things that could cause issues?  :)  We've
completely "loaded the gun" exposing all these attributes to the general user
when only the network-aware power-user should be playing with them.

(*) Changing these attributes could cause VMs to become unresponsive for a long
period of time depending on the deployment settings, and should be used with
caution.  Sometimes a VM reboot will be required to re-gain connectivity.
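
For reference, the knob being argued over is the deployment-wide lease time in
neutron.conf; the snippet below shows the option with what I believe is the
current upstream default (dnsmasq clients typically attempt renewal at half the
lease):

```ini
# /etc/neutron/neutron.conf (illustrative)
[DEFAULT]
# DHCP lease duration in seconds; -1 requests infinite leases.
dhcp_lease_duration = 86400
```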

> In summary, the information the DHCP server gives to clients is not static. 
> Unless we eliminate updates to everything in the Neutron API that results in 
> different DHCP lease information, my suggestion is that we include a new
> option for the renewal interval and have the default se

Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Jay Pipes
This is a bug that I discovered when fixing some of the NUMA related 
nova objects. I have a patch that should fix it up shortly.


This is what happens when we don't have any functional testing of stuff 
that is merged into master...


Best,
-jay

On 02/02/2015 11:44 AM, Chris Friesen wrote:

Hi,

I'm trying to make use of huge pages as described in 
"http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html".
  I'm running kilo as of Jan 27th.

I've allocated 1 2MB pages on a compute node.  "virsh capabilities" on that 
node contains:

<topology>
  <cells num='2'>
    <cell id='0'>
      <memory unit='KiB'>67028244</memory>
      <pages unit='KiB' size='4'>16032069</pages>
      <pages unit='KiB' size='2048'>5000</pages>
      <pages unit='KiB' size='1048576'>1</pages>
...
    <cell id='1'>
      <memory unit='KiB'>67108864</memory>
      <pages unit='KiB' size='4'>16052224</pages>
      <pages unit='KiB' size='2048'>5000</pages>
      <pages unit='KiB' size='1048576'>1</pages>

I then restarted nova-compute, set "hw:mem_page_size=large" on a flavor, and 
then tried to boot up an instance with that flavor.  I got the error logs below in 
nova-scheduler.  Is this a bug?


Feb  2 16:23:10 controller-0 nova-scheduler Exception during message handling: 
Cannot load 'mempages' in the base class
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/server.py", line 139, in 
inner
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
func(*args, **kwargs)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/manager.py", line 86, in 
select_destinations
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 
67, in select_destinations
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 
138, in _schedule
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties, index=num)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/host_manager.py", line 391, 
in get_filtered_hosts
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher hosts, 
filter_properties, index)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/filters.py", line 77, in 
get_filtered_objects
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher list_objs 
= list(objs)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/filters.py", line 43, in filter_all
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher if 
self._filter_one(obj, filter_properties):
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filters/__init__.py", line 
27, in _filter_one
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
self.host_passes(obj, filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filters/numa_topology_filter.py",
 line 45, in host_passes
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
limits_topology=limits))
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/virt/hardware.py", line 1161, in 
numa_fit_instance_to_host
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
host_cell, instance_cell, limit_cell)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/virt/hardware.py", line 851, in 
_numa_fit_instance_cell
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.

Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-02-02 Thread Adam Young

On 01/30/2015 02:19 AM, Thomas Spatzier wrote:

From: Zane Bitter 
To: openstack Development Mailing List



Date: 29/01/2015 17:47
Subject: [openstack-dev] [Heat][Keystone] Native keystone resources in

Heat

I got a question today about creating keystone users/roles/tenants in
Heat templates. We currently support creating users via the
AWS::IAM::User resource, but we don't have a native equivalent.

IIUC keystone now allows you to add users to a domain that is otherwise
backed by a read-only backend (i.e. LDAP). If this means that it's now
possible to configure a cloud so that one need not be an admin to create
users then I think it would be a really useful thing to expose in Heat.
Does anyone know if that's the case?

I think roles and tenants are likely to remain admin-only, but we have
precedent for including resources like that in /contrib... this seems
like it would be comparably useful.

Thoughts?

I am really not a keystone expert,

I am!  But when I grow up, I want to be a fireman!

so don't know what the security
implications would be, but I have heard the requirement or wish to be able
to create users, roles etc. from a template many times.
Should be possible.  LDAP can be read-only, but these things can all go 
into SQL, and just have a loose coupling with the LDAP entities.




I've talked to
people who want to explore this for onboarding use cases, e.g. for
onboarding of lines of business in a company, or for onboarding customers
in a public cloud case. They would like to be able to have templates that
lay out the overall structure for authentication stuff, and then
parameterize it for each onboarding process.


Those domains, users, projects, etc. would all go into SQL.  The only 
case to use LDAP would be if their remote organization already had an 
LDAP system that contained users, and they wanted to reuse it.  There 
are issues there, and I suspect Federation (SAML) will be the mechanism 
of choice for these types of integrations, not LDAP.
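
To make the onboarding use case concrete, the sort of template people are
asking for might look like the sketch below — note the OS::Keystone::* resource
types are hypothetical here; they are exactly what is being proposed, not
something Heat ships today:

```yaml
heat_template_version: 2014-10-16
parameters:
  business_unit:
    type: string
resources:
  unit_project:
    type: OS::Keystone::Project   # hypothetical resource type
    properties:
      name: { get_param: business_unit }
  unit_user:
    type: OS::Keystone::User      # hypothetical resource type
    properties:
      name: { get_param: business_unit }
      default_project: { get_resource: unit_project }
```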



If this is something to be enabled, that would be interesting to explore.

Regards,
Thomas


cheers,
Zane.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Chris Friesen
Hi,

I'm trying to make use of huge pages as described in 
"http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html".
  I'm running kilo as of Jan 27th.

I've allocated 1 2MB pages on a compute node.  "virsh capabilities" on that 
node contains:


  

  67028244
  16032069
  5000
  1
...

  67108864
  16052224
  5000
  1

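
For anyone reproducing this, the setup steps described amount to something like
the following (the page count and flavor name are illustrative, not taken from
the report):

```shell
# On the compute node: reserve 2MiB huge pages (count is illustrative)
echo 5000 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Then restart nova-compute and tag a flavor to request large pages
nova flavor-key m1.hugepages set hw:mem_page_size=large
```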

I then restarted nova-compute, set "hw:mem_page_size=large" on a flavor, and 
then tried to boot up an instance with that flavor.  I got the error logs below 
in nova-scheduler.  Is this a bug?


Feb  2 16:23:10 controller-0 nova-scheduler Exception during message handling: 
Cannot load 'mempages' in the base class
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/server.py", line 139, in 
inner
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
func(*args, **kwargs)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/manager.py", line 86, in 
select_destinations
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 
67, in select_destinations
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 
138, in _schedule
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties, index=num)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/host_manager.py", line 391, 
in get_filtered_hosts
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher hosts, 
filter_properties, index)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/filters.py", line 77, in 
get_filtered_objects
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher list_objs 
= list(objs)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/filters.py", line 43, in filter_all
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher if 
self._filter_one(obj, filter_properties):
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filters/__init__.py", line 
27, in _filter_one
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
self.host_passes(obj, filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filters/numa_topology_filter.py",
 line 45, in host_passes
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
limits_topology=limits))
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/virt/hardware.py", line 1161, in 
numa_fit_instance_to_host
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
host_cell, instance_cell, limit_cell)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/virt/hardware.py", line 851, in 
_numa_fit_instance_cell
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
host_cell, instance_cell)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/virt/hardware.py", line 692, in 
_numa_cell_supports_pagesize_request
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
avail_pag

Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-02-02 Thread Adam Young

On 01/29/2015 03:11 PM, Mike Bayer wrote:


Morgan Fainberg  wrote:


Are downward migrations really a good idea for us to support? Is this downward 
migration path a sane expectation? In the real world, would any one really 
trust the data after migrating downwards?

It’s a good idea for a migration script to include a rudimentary downgrade 
operation to complement the upgrade operation, if feasible.  The purpose of 
this downgrade is from




Except that it is code we need to maintain and support.  I think we 
are making more work for ourselves than the value these scripts provide 
justifies.

  a practical standpoint helpful when locally testing a specific, typically 
small series of migrations.

A downgrade however typically only applies to schema objects, and not so much 
data.   It is often impossible to provide downgrades of data changes as it is 
likely that a data upgrade operation was destructive of some data.  Therefore, 
when dealing with a full series of real world migrations that include data 
migrations within them, downgrades are typically impossible.   I’m getting the 
impression that our migration scripts have data migrations galore in them.

So I am +1 on establishing a policy that the deployer of the application would 
not have access to any “downgrade” migrations, and -1 on removing “downgrade” 
entirely from individual migrations.   Specific migration scripts may return 
NotImplemented for their downgrade if its really not feasible, but for things 
like table and column changes where autogenerate has already rendered the 
downgrade, it’s handy to keep at least the smaller ones working.
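
To illustrate the point about declining infeasible downgrades, here is a
minimal sketch — sqlite3 stands in for the real engine, and real Nova scripts
use the sqlalchemy-migrate API rather than raw SQL; the table and column are
invented:

```python
# Sketch only: sqlite3 stands in for the database engine; real migration
# scripts receive a SQLAlchemy engine and use the sqlalchemy-migrate API.
import sqlite3

def upgrade(conn):
    # Purely additive schema change, safe to apply.
    conn.execute("ALTER TABLE instances ADD COLUMN hostname VARCHAR(255)")

def downgrade(conn):
    # Dropping the column would destroy any data written after the upgrade,
    # so refuse instead of silently losing it.
    raise NotImplementedError("downgrade of this migration is destructive")

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE instances (id INTEGER)")
    upgrade(conn)
    print([row[1] for row in conn.execute("PRAGMA table_info(instances)")])
```

The upgrade is additive and reversible in principle; the downgrade refuses
rather than pretending to be reversible.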





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-02-02 Thread Evgeniy L
Hi Dmitry,

I've read about inventories and I'm not sure that's what we really need:
an inventory gives you a kind of node-discovery mechanism, but what we
need is to get some abstract data and convert it into a more task-friendly
format.

In another thread I've mentioned Variables [1] in Ansible, probably it
fits more than inventory from architecture point of view.

With this functionality plugin will be able to get required information from
Nailgun via REST API and pass the information into specific task.

But it's not the way to go for the core deployment. I would like to remind
you of what we had two years ago: Nailgun passed information in format A
to the Orchestrator (Astute), and then the Orchestrator converted it into
a second format B. It was horrible from a debugging point of view; it's
always hard when you have to look in several places to figure out what
you get as a result. Your suggestion has a pretty similar design, dividing
serialization logic between Nailgun and another layer in task scripts.

Thanks,

[1] http://docs.ansible.com/playbooks_variables.html#registered-variables
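
The conversion step in question — fetch abstract data from the Nailgun REST API
and reshape it into a tasks-friendly structure — is essentially this (the field
names and groups below are illustrative, not the actual Nailgun schema):

```python
# Illustrative sketch: "roles"/"ip" field names and the group names are
# assumptions, not the real Nailgun API schema.
import json

def build_inventory(nodes):
    """Convert abstract node data into Ansible-style inventory groups."""
    inventory = {"controllers": {"hosts": []}, "computes": {"hosts": []}}
    for node in nodes:
        group = ("controllers" if "controller" in node.get("roles", [])
                 else "computes")
        inventory[group]["hosts"].append(node["ip"])
    return inventory

if __name__ == "__main__":
    # A real dynamic inventory would fetch this list over HTTP from the API.
    nodes = [{"ip": "10.0.0.2", "roles": ["controller"]},
             {"ip": "10.0.0.3", "roles": ["compute"]}]
    print(json.dumps(build_inventory(nodes)))
```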

On Mon, Feb 2, 2015 at 5:05 PM, Dmitriy Shulyak 
wrote:

>
> >> But why to add another interface when there is one already (rest api)?
>>
>> I'm ok if we decide to use REST API, but of course there is a problem
>> which
>> we should solve, like versioning, which is much harder to support, than
>> versioning
>> in core-serializers. Also do you have any ideas how it can be implemented?
>>
>
> We need to think about deployment serializers not as part of nailgun (fuel
> data inventory), but - part of another layer which uses nailgun api to
> generate deployment information. Lets take ansible for example, and
> dynamic inventory feature [1].
> Nailgun API can be used inside of ansible dynamic inventory to generate
> config that will be consumed by ansible during deployment.
>
> Such approach will have several benefits:
> - cleaner interface (ability to use ansible as main interface to control
> deployment and all its features)
> - deployment configuration will be tightly coupled with deployment code
> - no limitation on what sources to use for configuration, and how to
> compute additional values from requested data
>
> I want to emphasize that i am not considering ansible as solution for
> fuel, it serves only as example of architecture.
>
>
>> You run some code which get the information from api on the master node
>> and
>> then sets the information in tasks? Or you are going to run this code on
>> OpenStack
>> nodes? As you mentioned in case of tokens, you should get the token right
>> before
>> you really need it, because of expiring problem, but in this case you
>> don't
>> need any serializers, get required token right in the task.
>>
>
> I think all information should be fetched before deployment.
>
>>
>>
> >> What is your opinion about serializing additional information in
>> plugins code? How it can be done, without exposing db schema?
>>
>> With exposing the data in more abstract way the way it's done right now
>> for the current deployment logic.
>>
>
> I mean what if plugin will want to generate additional data, like -
> https://review.openstack.org/#/c/150782/? Schema will be still exposed?
>
> [1] http://docs.ansible.com/intro_dynamic_inventory.html
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Sean Dague
On 02/02/2015 11:35 AM, Alexandre Levine wrote:
> Thank you Sean.
> 
> There'll be tons of EC2 Tempest tests for your attention shortly.
> How would you prefer them? In several reviews, I believe. Not in one,
> right?
> 
> Best regards,
>   Alex Levine

So, honestly, I think that we should probably look at getting the ec2
tests out of the Tempest tree as well and into a more dedicated place.
Like as part of the stackforge project tree. Given that the right
expertise would be there as well. It could use tempest-lib for some of
the common parts.

-Sean

> 
> On 2/2/15 6:55 PM, Sean Dague wrote:
>> On 02/02/2015 07:01 AM, Alexandre Levine wrote:
>>> Michael,
>>>
>>> I'm rather new here, especially in regard to communication matters, so
>>> I'd also be glad to understand how it's done and then I can drive it if
>>> it's ok with everybody.
>>> By saying EC2 sub team - who did you keep in mind? From my team 3
>>> persons are involved.
>>>
>>>  From the technical point of view the transition plan could look
>>> somewhat
>>> like this (sequence can be different):
>>>
>>> 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
>>> 2. Contribute Tempest tests for EC2 functionality and employ them
>>> against nova's EC2.
>>> 3. Write spec for required API to be exposed from nova so that we get
>>> full info.
>>> 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
>>> 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
>>> 6. Communicate and discover all of the existing questions and
>>> problematic points for the switching from existing EC2 API to the new
>>> one. Provide solutions or decisions about them.
>>> 7. Do performance testing of the new stackforge/ec2 and provide fixes if
>>> any bottlenecks come up.
>>> 8. Have all of the above prepared for the Vancouver summit and discuss
>>> the situation there.
>>>
>>> Michael, I am still wondering, who's going to be responsible for timely
>>> reviews and approvals of the fixes and tests we're going to contribute
>>> to nova? So far this is the biggest risk. Is there anyway to allow some
>>> of us to participate in the process?
>> I am happy to volunteer to shephard these reviews. I'll try to keep an
>> eye on them, and if something is blocking please just ping me directly
>> on IRC in #openstack-nova or bring them forward to the weekly Nova
>> meeting.
>>
>> -Sean
>>
> 
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-02-02 Thread Adam Young

On 01/30/2015 07:23 AM, Sandy Walsh wrote:


From: Johannes Erdfelt [johan...@erdfelt.com]
Sent: Thursday, January 29, 2015 9:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

On Thu, Jan 29, 2015, Morgan Fainberg  wrote:

The concept that there is a utility that can (and in many cases
willfully) cause permanent, and in some cases irrevocable, data loss
from a simple command line interface sounds crazy when I try and
explain it to someone.

The more I work with the data stored in SQL, and the more I think we
should really recommend the tried-and-true best practices when trying
to revert from a migration: Restore your DB to a known good state.

You mean like restoring from backup?

Unless your code deploy fails before it has any chance of running, then
you could have had new instances started or instances changed and then
restoring from backups would lose data.

If you meant another way of restoring your data, then there are
some strategies that downgrades could employ that doesn't lose data,
but there is nothing that can handle 100% of cases.

All of that said, for the Rackspace Public Cloud, we have never rolled
back our deploy. We have always rolled forward for any fixes we needed.


From my perspective, I'd be fine with doing away with downgrades, but
I'm not sure how to document that deployers should roll forward if they
have any deploy problems.

JE

Yep ... downgrades simply aren't practical with a SQL-schema based
solution. Too coarse-grained.

We'd have to move to a schema-less model, per-record versioning and
up-down conversion at the Nova Objects layer. Or, possibly introduce
more nodes that can deal with older versions. Either way, that's a big
hairy change


Horse pocky!  Schema-less means "implied contract instead of explicit."  
That would be madness.  Please take the "NoSQL good, SQL bad" approach out 
of the conversation, as absotutely (yes, absotutely) everything we have 
here is doubly true for NoSQL, we just don't hammer on it as much.  We 
don't even document the record formats in the NoSQL cases in Keystone, so 
we can break them both willy and nilly, but have often found that we are 
just stuck.  Usually we are only dealing with the token table, and so 
we just dump the old tokens and shake our heads sadly.







The upgrade code is still required, so removing the downgrades (and
tests, if any) is a relatively small change to the code base.

The bigger issue is the anxiety the deployer will experience until a
patch lands.

-S

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with instance consoles and novnc

2015-02-02 Thread Chris Friesen

On 01/30/2015 06:26 AM, Jesse Pretorius wrote:

On 29 January 2015 at 04:57, Chris Friesen <chris.frie...@windriver.com> wrote:

On 01/28/2015 10:33 PM, Mathieu Gagné wrote:

On 2015-01-28 11:13 PM, Chris Friesen wrote:

Anyone have any suggestions on where to start digging?

We have a similar issue which has yet to be properly diagnosed on our 
side.

One workaround which looks to be working for us is enabling the "private
mode"
in the browser. If it doesn't work, try deleting your cookies.

Can you see if those workarounds work for you?


Neither of those seems to work for me.  I still get a multi-second delay and
then the red bar with "Connect timeout".

I suspect it's something related to websockify, but I can't figure out what.


In some versions of websockify and the related noVNC versions that use it, I've
seen the same behaviour. This is due to the way websockify tries to detect the
protocol to use. It ends up doing a localhost connection and the browser rejects
it as an unsafe operation.

It was fixed in later versions of websockify.

Have you tried manually updating the NoVNC and websockify files to later
versions from source?


We were already using a fairly recent version of websockify, but it turns out 
that we needed to upversion the novnc package.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine

Thank you Sean.

There'll be tons of EC2 Tempest tests for your attention shortly.
How would you prefer them? In several reviews, I believe. Not in one, right?

Best regards,
  Alex Levine

On 2/2/15 6:55 PM, Sean Dague wrote:

On 02/02/2015 07:01 AM, Alexandre Levine wrote:

Michael,

I'm rather new here, especially in regard to communication matters, so
I'd also be glad to understand how it's done and then I can drive it if
it's ok with everybody.
By saying EC2 sub team - who did you keep in mind? From my team 3
persons are involved.

 From the technical point of view the transition plan could look somewhat
like this (sequence can be different):

1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them
against nova's EC2.
3. Write spec for required API to be exposed from nova so that we get
full info.
4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Communicate and discover all of the existing questions and
problematic points for the switching from existing EC2 API to the new
one. Provide solutions or decisions about them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if
any bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss
the situation there.

Michael, I am still wondering, who's going to be responsible for timely
reviews and approvals of the fixes and tests we're going to contribute
to nova? So far this is the biggest risk. Is there anyway to allow some
of us to participate in the process?

I am happy to volunteer to shephard these reviews. I'll try to keep an
eye on them, and if something is blocking please just ping me directly
on IRC in #openstack-nova or bring them forward to the weekly Nova meeting.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Definition Formats

2015-02-02 Thread michael mccune

On 02/02/2015 10:26 AM, Chris Dent wrote:

pecan-swagger looks cool but presumably pecan has most of the info
you're putting in the decorators in itself already? So, given an
undecorated pecan app, would it be possible to provide it to a function
and have that function output all the paths?



you are correct, pecan is storing most of the information we want in 
its controller metadata. i am working on the next version of 
pecan-swagger now that will reduce the need for so many decorators, and 
instead pull the endpoint information out of the pecan-based controller 
classes.


in terms of having a completely undecorated pecan app, i'm not sure 
that's possible just yet due to the object-dispatch routing used by 
pecan. in the next version of pecan-swagger i'm going to reduce the 
decorators to only be needed on controller classes, but i'm not sure 
it will be possible to reduce further, as there will need to be some 
way to learn the route path hierarchy.


i suppose in the future it might be advantageous to create a pecan 
controller base class that could help inform the routing structure, but 
this would still need to be added to current pecan projects.
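
for reference, the decorator-collection approach looks roughly like this (the
names here are made up for illustration, not the real pecan-swagger interface):

```python
# sketch only: illustrative names, not the actual pecan-swagger API
_paths = []

def swagger_path(route):
    """class decorator that records a route for later swagger generation."""
    def wrap(cls):
        _paths.append({"route": route, "controller": cls.__name__})
        return cls
    return wrap

@swagger_path("/books")
class BooksController(object):
    def get_all(self):
        return []

def generate_paths():
    """emit the collected route metadata for a swagger 'paths' section."""
    return list(_paths)

if __name__ == "__main__":
    print(generate_paths())
```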



mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-02 Thread Sean Dague
On 02/02/2015 10:55 AM, Daniel P. Berrange wrote:
> On Mon, Feb 02, 2015 at 07:44:24AM -0800, Dan Smith wrote:
>>> I'm with Daniel on that one. We shouldn't "deprecate" until we are 100%
>>> sure that the replacement is up to the task and that strategy is solid.
>>
>> My problem with this is: If there wasn't a stackforge project, what
>> would we do? Nova's in-tree EC2 support has been rotting for years now,
>> and despite several rallies for developers, no real progress has been
>> made to rescue it. I don't think that it's reasonable to say that if
>> there wasn't a stackforge project we'd just have to suck it up and
>> magically produce the developers to work on EC2; it's clear that's not
>> going to happen.
> 
> I think that is exactly what we'd would have todo. We exist as a project
> to serve the needs of our users and it seems pretty clear from the survey
> results that users are deploying the EC2 impl in significant numbers,
> so to just remove it would essentially be ignoring what our users want
> from the project. If we're saying it is reasonable to ignore what our
> users want, then this project is frankly doomed.
> 
>> Thus, it seems to me that we need to communicate that our EC2 support is
>> going away. Hopefully the stackforge project will be at a point to
>> support users that want to keep the functionality. However, the fate of
>> our in-tree support seems clear regardless of how that turns out.
> 
> If the external EC2 support doesn't work out for whatever reason, then
> I don't think the fate of the in-tree support is at all clear. I think
> it would have a very strong case for continuing to exist.

It's really easy to say "someone should do this", but the problem is
that none of the core team is interested, nor is anyone else. Most of
the people who were once interested are no longer active in OpenStack.

EC2 compatibility does not appear to be part of the long-term strategy
for the project, and hasn't been for a while (judging by the level of
maintenance here). OK, we should signal that so that new and existing
users who believe it is a core supported feature realize it's not.

The fact that there is some plan to exist out of tree is a bonus,
however the fact that this is not a first class feature in Nova really
does need to be signaled. It hasn't been.

Maybe deprecation is the wrong tool for that, and marking EC2 as
experimental and unsupported in the log messages is more appropriate.
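As an illustrative sketch of that last option (this is not Nova code; the logger name and message wording are invented for the example), an experimental-status warning could be emitted once at service start so deployers see the support status in their logs:

```python
import logging

# Illustrative logger name, not the one Nova actually uses.
LOG = logging.getLogger('nova.api.ec2')

EC2_EXPERIMENTAL_MSG = ('The in-tree EC2 API is experimental and no longer '
                        'actively maintained; it may be removed in a future '
                        'release.')


def warn_ec2_experimental():
    """Log the support status at startup and return the message so
    callers (or tests) can inspect it."""
    LOG.warning(EC2_EXPERIMENTAL_MSG)
    return EC2_EXPERIMENTAL_MSG
```

The point is only that a log warning is cheap to add and visible to operators, without the compatibility promises implied by formal deprecation.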

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Daniel P. Berrange
On Mon, Feb 02, 2015 at 11:19:45AM -0500, Andrew Laski wrote:
> 
> On 02/02/2015 05:58 AM, Daniel P. Berrange wrote:
> >On Sun, Feb 01, 2015 at 11:20:08AM -0800, Noel Burton-Krahn wrote:
> >>Thanks for bringing this up, Daniel.  I don't think it makes sense to have
> >>a timeout on live migration, but operators should be able to cancel it,
> >>just like any other unbounded long-running process.  For example, there's
> >>no timeout on file transfers, but they need an interface to report progress
> >>and to cancel them.  That would imply an option to cancel evacuation too.
> >There has been periodic talk about a generic "tasks API" in Nova for managing
> >long running operations and getting information about their progress, but I
> >am not sure what the status of that is. It would obviously be applicable to
> >migration if that's a route we took.
> 
> Currently the status of a tasks API is that it would happen after the API
> v2.1 microversions work has created a suitable framework in which to add
> tasks to the API.

So is all work on tasks blocked by the microversions support? I would have
thought that would only block places where we need to modify existing APIs.
Are we not able to add APIs for listing / cancelling tasks as new APIs
without such a dependency on microversions ?
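For the sake of discussion, a list/cancel tasks API would need some kind of task tracking underneath it. The sketch below is purely hypothetical (not Nova code, names invented): an in-memory, thread-safe registry where a long-running operation such as a live migration registers itself, reports state, and can be asked to cancel.

```python
import threading
import uuid


class TaskRegistry:
    """Hypothetical in-memory task tracker backing a list/cancel API."""

    def __init__(self):
        self._tasks = {}
        self._lock = threading.Lock()

    def start(self, kind):
        """Register a new running task and return its id."""
        task_id = str(uuid.uuid4())
        with self._lock:
            self._tasks[task_id] = {'kind': kind,
                                    'state': 'running',
                                    'progress': 0}
        return task_id

    def list(self):
        """Snapshot of all known tasks, e.g. for GET /tasks."""
        with self._lock:
            return {tid: dict(t) for tid, t in self._tasks.items()}

    def cancel(self, task_id):
        """Request cancellation; the worker is expected to notice the
        'cancelling' state and stop. Returns the resulting state."""
        with self._lock:
            task = self._tasks[task_id]
            if task['state'] == 'running':
                task['state'] = 'cancelling'
            return task['state']
```

A migration worker would periodically check its task's state and abort the libvirt job when it sees 'cancelling'; the real design questions (persistence, API shape, microversioning) are exactly what the thread is debating.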

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Andrew Laski


On 02/02/2015 05:58 AM, Daniel P. Berrange wrote:

On Sun, Feb 01, 2015 at 11:20:08AM -0800, Noel Burton-Krahn wrote:

Thanks for bringing this up, Daniel.  I don't think it makes sense to have
a timeout on live migration, but operators should be able to cancel it,
just like any other unbounded long-running process.  For example, there's
no timeout on file transfers, but they need an interface to report progress
and to cancel them.  That would imply an option to cancel evacuation too.

There has been periodic talk about a generic "tasks API" in Nova for managing
long running operations and getting information about their progress, but I
am not sure what the status of that is. It would obviously be applicable to
migration if that's a route we took.


Currently the status of a tasks API is that it would happen after the 
API v2.1 microversions work has created a suitable framework in which to 
add tasks to the API.




Regards,
Daniel



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >