Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-08 Thread Tomas Sedovic
On 08/04/14 01:50, Robert Collins wrote:
> tl;dr: 3 more core members to propose:
> bnemec
> greghaynes
> jdon

-1, there's a typo in jdob's nick ;-)

In all seriousness, I support all of them being added to core.

> 
> 
> On 4 April 2014 08:55, Chris Jones  wrote:
>> Hi
>>
>> +1 for your proposed -core changes.
>>
>> Re your question about whether we should retroactively apply the 3-a-day
>> rule to the 3 month review stats, my suggestion would be a qualified no.
>>
>> I think we've established an agile approach to the member list of -core, so
> if there are one or two people who we would have added to -core before the
>> goalposts moved, I'd say look at their review quality. If they're showing
>> the right stuff, let's get them in and helping. If they don't feel our new
>> goalposts are achievable with their workload, they'll fall out again
>> naturally before long.
> 
> So I've actioned the prior vote.
> 
> I said: "Bnemec, jdob, greg etc - good stuff, I value your reviews
> already, but..."
> 
> So... looking at a few things - long period of reviews:
> 60 days:
> |greghaynes   | 121  0  22  99  0  0  81.8% |   14 ( 11.6%)  |
> |  bnemec     | 116  0  38  78  0  0  67.2% |   10 (  8.6%)  |
> |   jdob      |  87  0  15  72  0  0  82.8% |    4 (  4.6%)  |
> 
> 90 days:
> 
> |  bnemec     | 145  0  40 105  0  0  72.4% |   17 ( 11.7%)  |
> |greghaynes   | 142  0  23 119  0  0  83.8% |   22 ( 15.5%)  |
> |   jdob      | 106  0  17  89  0  0  84.0% |    7 (  6.6%)  |
> 
> Ben's reviews are thorough, he reviews across all contributors, he
> shows good depth of knowledge and awareness across tripleo, and is
> sensitive to the pragmatic balance between 'right' and 'good enough'.
> I'm delighted to support him for core now.
> 
> Greg is very active, reviewing across all contributors with pretty
> good knowledge and awareness. I'd like to see a little more contextual
> awareness though - there's a few (but not many) reviews where looking
> more at the big picture of how things fit together would have been
> beneficial. *however*, I think that's a room-to-improve issue vs
> not-good-enough-for-core - to me it makes sense to propose him for
> core too.
> 
> Jay's reviews are also very good and consistent, somewhere between
> Greg and Ben in terms of bigger-context awareness - so another
> definite +1 from me.
> 
> -Rob
> 
> 
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Agenda for tomorrow - please add topics

2014-04-08 Thread Xuhan Peng
Sean,

I've added Salvatore's code review of "Hide ipv6 subnet API attributes" to
our discussion list.


https://review.openstack.org/#/c/85869/

Xuhan


On Tue, Apr 8, 2014 at 4:49 AM, Collins, Sean <
sean_colli...@cable.comcast.com> wrote:

> Hi,
>
> I've added a section for tomorrow's agenda, please do add topics that
> you'd like to discuss.
>
>
> https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam#Agenda_for_April_8th
>
>
> --
> Sean M. Collins
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] use of the "oslo" namespace package

2014-04-08 Thread Mark McLoughlin
On Mon, 2014-04-07 at 15:24 -0400, Doug Hellmann wrote:
> We can avoid adding to the problem by putting each new library in its
> own package. We still want the Oslo name attached for libraries that
> are really only meant to be used by OpenStack projects, and so we need
> a naming convention. I'm not entirely happy with the "crammed
> together" approach for oslotest and oslosphinx. At one point Dims and
> I talked about using a prefix "oslo_" instead of just "oslo", so we
> would have "oslo_db", "oslo_i18n", etc. That's also a bit ugly,
> though. Opinions?

Uggh :)

> Given the number of problems we have now (I help about 1 dev per week
> unbreak their system),

I've seen you do this - kudos on your patience.

>  I think we should also consider renaming the
> existing libraries to not use the namespace package. That isn't a
> trivial change, since it will mean updating every consumer as well as
> the packaging done by distros. If we do decide to move them, I will
> need someone to help put together a migration plan. Does anyone want
> to volunteer to work on that?

One thing to note for any migration plan on this - we should use a new
pip package name for the new version so people with e.g.

   oslo.config>=1.2.0

don't automatically get updated to a version which has the code in a
different place. You should need to change to e.g.

  osloconfig>=1.4.0

> Before we make any changes, it would be good to know how bad this
> problem still is. Do developers still see issues on clean systems, or
> are all of the problems related to updating devstack boxes? Are people
> figuring out how to fix or work around the situation on their own? Can
> we make devstack more aggressive about deleting oslo libraries before
> re-installing them? Are there other changes we can make that would be
> less invasive?

I don't have any great insight, but hope we can figure something out.
It's crazy to think that even though namespace packages appear to work
pretty well initially, it might end up being so unworkable we would need
to switch.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] use of the "oslo" namespace package

2014-04-08 Thread Donald Stufft

On Apr 8, 2014, at 3:28 AM, Mark McLoughlin  wrote:

> On Mon, 2014-04-07 at 15:24 -0400, Doug Hellmann wrote:
>> We can avoid adding to the problem by putting each new library in its
>> own package. We still want the Oslo name attached for libraries that
>> are really only meant to be used by OpenStack projects, and so we need
>> a naming convention. I'm not entirely happy with the "crammed
>> together" approach for oslotest and oslosphinx. At one point Dims and
>> I talked about using a prefix "oslo_" instead of just "oslo", so we
>> would have "oslo_db", "oslo_i18n", etc. That's also a bit ugly,
>> though. Opinions?
> 
> Uggh :)
> 
>> Given the number of problems we have now (I help about 1 dev per week
>> unbreak their system),
> 
> I've seen you do this - kudos on your patience.
> 
>> I think we should also consider renaming the
>> existing libraries to not use the namespace package. That isn't a
>> trivial change, since it will mean updating every consumer as well as
>> the packaging done by distros. If we do decide to move them, I will
>> need someone to help put together a migration plan. Does anyone want
>> to volunteer to work on that?
> 
> One thing to note for any migration plan on this - we should use a new
> pip package name for the new version so people with e.g.
> 
>   oslo.config>=1.2.0
> 
> don't automatically get updated to a version which has the code in a
> different place. You should need to change to e.g.
> 
>  osloconfig>=1.4.0
> 
>> Before we make any changes, it would be good to know how bad this
>> problem still is. Do developers still see issues on clean systems, or
>> are all of the problems related to updating devstack boxes? Are people
>> figuring out how to fix or work around the situation on their own? Can
>> we make devstack more aggressive about deleting oslo libraries before
>> re-installing them? Are there other changes we can make that would be
>> less invasive?
> 
> I don't have any great insight, but hope we can figure something out.
> It's crazy to think that even though namespace packages appear to work
> pretty well initially, it might end up being so unworkable we would need
> to switch.
> 
> Mark.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Primarily this is because there are 3ish ways for a package to be installed, and
two methods of namespace packages (under the hood). However there is no
one single way to install a namespace package that works for all 3ish ways
to install a package.

Relevant: https://github.com/pypa/pip/issues/3
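
For anyone unfamiliar with the two methods, these are the usual declarations
(a sketch; each would live in the oslo/__init__.py shipped by a given
distribution):

    # Style 1: pkgutil-style namespace package -- oslo/__init__.py
    __path__ = __import__('pkgutil').extend_path(__path__, __name__)

    # Style 2: setuptools (pkg_resources) namespace package -- oslo/__init__.py
    __import__('pkg_resources').declare_namespace(__name__)

Mixing the two styles, or mixing install methods that each expect a different
style, is roughly where the breakage described above comes from.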

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] the question of security group rule

2014-04-08 Thread shihanzhang
Howdy Stackers!


There is a security group problem that has been bothering me, though I do not 
know whether it is appropriate to ask about it here! A security group rule is 
converted into iptables rules on the compute node, but one iptables rule, '-m 
state --state RELATED,ESTABLISHED -j RETURN', confuses me. According to my 
understanding, this rule improves the performance of security groups by only 
filtering the first packet of each connection. Are there other reasons for it? 
I have a use case: create a security group with a few rules, then gradually 
increase the number of security group rules based on business needs. If a VM 
in this security group already has an established connection, the new rules 
will not take effect for it. How could I deal with such a scenario?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Supporting upstream RAs

2014-04-08 Thread Da Zhao Y Yu
Hi Sean,

That's OK for me, thanks for your work.


Thanks & Best Regards
Yu Da Zhao(于大钊)
--
Cloud Solutions & OpenStack Development
China Systems & Technology Laboratory in Beijing
Email: d...@cn.ibm.com
Tel:   (86)10-82450677
--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-08 Thread Ladislav Smola

+1 for the -core changes

jdon sounds like a pretty cool Mafia name, +1 for Don Jay


On 04/08/2014 09:10 AM, Tomas Sedovic wrote:

On 08/04/14 01:50, Robert Collins wrote:

tl;dr: 3 more core members to propose:
bnemec
greghaynes
jdon

-1, there's a typo in jdob's nick ;-)

In all seriousness, I support all of them being added to core.



On 4 April 2014 08:55, Chris Jones  wrote:

Hi

+1 for your proposed -core changes.

Re your question about whether we should retroactively apply the 3-a-day
rule to the 3 month review stats, my suggestion would be a qualified no.

I think we've established an agile approach to the member list of -core, so
if there are one or two people who we would have added to -core before the
goalposts moved, I'd say look at their review quality. If they're showing
the right stuff, let's get them in and helping. If they don't feel our new
goalposts are achievable with their workload, they'll fall out again
naturally before long.

So I've actioned the prior vote.

I said: "Bnemec, jdob, greg etc - good stuff, I value your reviews
already, but..."

So... looking at a few things - long period of reviews:
60 days:
|greghaynes   | 121  0  22  99  0  0  81.8% |   14 ( 11.6%)  |
|  bnemec     | 116  0  38  78  0  0  67.2% |   10 (  8.6%)  |
|   jdob      |  87  0  15  72  0  0  82.8% |    4 (  4.6%)  |

90 days:

|  bnemec     | 145  0  40 105  0  0  72.4% |   17 ( 11.7%)  |
|greghaynes   | 142  0  23 119  0  0  83.8% |   22 ( 15.5%)  |
|   jdob      | 106  0  17  89  0  0  84.0% |    7 (  6.6%)  |

Ben's reviews are thorough, he reviews across all contributors, he
shows good depth of knowledge and awareness across tripleo, and is
sensitive to the pragmatic balance between 'right' and 'good enough'.
I'm delighted to support him for core now.

Greg is very active, reviewing across all contributors with pretty
good knowledge and awareness. I'd like to see a little more contextual
awareness though - there's a few (but not many) reviews where looking
more at the big picture of how things fit together would have been
beneficial. *however*, I think that's a room-to-improve issue vs
not-good-enough-for-core - to me it makes sense to propose him for
core too.

Jay's reviews are also very good and consistent, somewhere between
Greg and Ben in terms of bigger-context awareness - so another
definite +1 from me.

-Rob






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] DHCP address being SNAT by L3 agent

2014-04-08 Thread Xuhan Peng
Hi Neutron stackers,

I have a question about how to fix the problem of DHCP port address being
SNAT by L3 agent.

I have my neutron DHCP agent and L3 agent running on the same network node,
and I disabled namespace usage in both agent configuration. I have one
router created with one external network and one internal network attached.

After enabling the security group settings, I found that VMs on the compute
node cannot get DHCP messages from dnsmasq on the DHCP port of the network node.

After further investigation by tcpdumping the packets from the network node's
DHCP port, I figured out that the source IP of the DHCP messages sent from the
DHCP port has been SNAT'ed into the external gateway IP address by the L3
agent. Therefore, the security group rule that allows DHCP traffic from the
internal DHCP address doesn't work anymore.

Chain neutron-vpn-agen-snat (1 references)
target                       prot opt source       destination
neutron-vpn-agen-float-snat  all  --  anywhere     anywhere
SNAT                         all  --  10.1.1.0/24  anywhere     to:192.168.1.113

The DHCP port address 10.1.1.2 is in the CIDR whose source IPs get SNAT'ed.
This only happens when the DHCP agent and L3 agent are on the same node and
both have namespaces disabled.


To fix this, I think we can either:

1. Add a RETURN rule before the SNAT rule for the DHCP port so SNAT won't
be applied to the DHCP port.

2. Break the source CIDR of the SNAT rule into IP ranges that exclude the
DHCP address.

What's your opinion on this?

Xuhan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-08 Thread Ladislav Smola

+1 to this:

nova:
  config:
    default.compute_manager: ironic.nova.compute.manager.ClusterComputeManager
    cells.driver: nova.cells.rpc_driver.CellsRPCDriver

Adding a generic mechanism like this and having everything configurable
seems like the best option to me.


On 04/08/2014 01:51 AM, Dan Prince wrote:


- Original Message -

From: "Robert Collins" 
To: "OpenStack Development Mailing List" 
Sent: Monday, April 7, 2014 4:00:30 PM
Subject: [openstack-dev] [TripleO] config options, defaults, oh my!

So one interesting thing from the influx of new reviews is lots of
patches exposing all the various plumbing bits of OpenStack. This is
good in some ways (yay, we can configure more stuff), but in some ways
its kindof odd - like - its not clear when
https://review.openstack.org/#/c/83122/ is needed.

I'm keen to expose things that are really needed, but i'm not sure
that /all/ options are needed - what do folk think?

I think we can learn much from some of the more mature configuration management 
tools in the community on this front. Using puppet as an example here (although 
I'm sure other tools may do similar things as well)... Take configuration of 
the Nova API server. There is a direct configuration parameter for 
'neutron_metadata_proxy_shared_secret' in the Puppet nova::api class. This 
parameter is exposed in the class (sort of the equivalent of a TripleO element) 
directly because it is convenient and many users may want to customize the 
value. There are however hundreds of Nova config options and most of them 
aren't exposed as parameters in the various Nova puppet classes. For these it 
is possible to define a nova_config resource to configure *any* nova.conf 
parameter in an ad hoc style for your own installation tuning purposes.

I could see us using a similar model in TripleO where our elements support configuring 
common config elements directly, but we also allow people to tune extra 
"undocumented" options for their own use. There is always going to be a need 
for this as people need to tune things for their own installations with options that may 
not be appropriate for the common set of elements.

Standardizing this mechanism across many of the OpenStack service elements 
would also make a lot of sense. Today we have this for Nova:

nova:
  verbose: False
    - Print more verbose output (set logging level to INFO instead of
      default WARNING level).
  debug: False
    - Print debugging output (set logging level to DEBUG instead of
      default WARNING level).
  baremetal:
    pxe_deploy_timeout: "1200"
  ...

I could see us adding a generic mechanism like this to overlay with the 
existing (documented) data structure:

nova:
  config:
    default.compute_manager: ironic.nova.compute.manager.ClusterComputeManager
    cells.driver: nova.cells.rpc_driver.CellsRPCDriver

And in this manner a user might be able to add *any* supported config param to 
the element.



Also, some things
really should be higher order operations - like the neutron callback
to nova right - that should be either set to timeout in nova &
configured in neutron, *or* set on both sides appropriately, never
one half without the other.

I think we need to sort out our approach here to be systematic quite
quickly to deal with these reviews.

I totally agree. I was also planning to email the list about this very issue this week :) 
My email subject was going to be "TripleO templates... an upstream maintenance 
problem".

For the existing reviews today I think we should be somewhat selective about what 
parameters we expose as top level within the elements. That said we are missing some 
rather fundamental features to allow users to configure "undocumented" 
parameters as well. So we need to solve this problem quickly because there are certainly 
some configuration corner cases that users will need.

As it is today we are missing some rather fundamental features in os-apply-config 
and the elements to be able to pull this off. What we really need is a generic 
INI style template generator. Or perhaps we could use something like Augeas or 
even devstack's simple ini editing functions to pull this off. In any case the 
idea would be that we allow users to inject their own undocumented config 
parameters into the various service config files. Or perhaps we could 
auto-generate mustache templates based off of the upstream sample config files. 
Many approaches would work here I think...
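
For illustration, a minimal sketch of such an INI injector (the function and
the overrides structure are hypothetical, not an existing TripleO or
os-apply-config interface), using Python 2's ConfigParser:

    import ConfigParser

    def apply_overrides(conf_path, overrides):
        # overrides maps section -> {option: value}, e.g.
        # {'DEFAULT': {'compute_manager': '...'},
        #  'cells': {'driver': '...'}}
        parser = ConfigParser.RawConfigParser()
        parser.read(conf_path)
        for section, options in overrides.items():
            if section != 'DEFAULT' and not parser.has_section(section):
                parser.add_section(section)
            for option, value in options.items():
                parser.set(section, option, value)
        # note: rewriting the file this way drops any comments in it
        with open(conf_path, 'w') as conf_file:
            parser.write(conf_file)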



Here's an attempt to do so - this could become a developers guide patch.

Config options in TripleO
==

Non-API driven configuration falls into four categories:
A - fixed at buildtime (e.g. ld.so path)
B - cluster state derived
C - local machine derived
D - deployer choices

For A, it should be entirely done within the elements concerned.

For B, the heat template should accept parameters to choose the
desired config (e.g. the Neutron->Nova example above) but then express
the config in basic primitives in the instance metadata.

Re: [openstack-dev] [Tuskar][TripleO] Tuskar Planning for Juno

2014-04-08 Thread Ladislav Smola
Thanks Mainn for putting this together, looks like a fairly precise list 
of things we need to do in J.

On 04/07/2014 03:36 PM, Tzu-Mainn Chen wrote:

Hi all,

One of the topics of discussion during the TripleO midcycle meetup a few weeks
ago was the direction we'd like to take Tuskar during Juno.  Based on the ideas
presented there, we've created a tentative list of items we'd like to address:

https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning

Please feel free to take a look and question, comment, or criticize!


Thanks,
Tzu-Mainn Chen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Supporting upstream RAs

2014-04-08 Thread Xuhan Peng
Sean,

Sure. Thanks for fixing this.

Xuhan


On Tue, Apr 8, 2014 at 3:42 PM, Da Zhao Y Yu  wrote:

> Hi Sean,
>
> That's OK for me, thanks for your work.
>
>
> Thanks & Best Regards
> Yu Da Zhao(于大钊)
> --
> Cloud Solutions & OpenStack Development
> China Systems & Technology Laboratory in Beijing
> Email: d...@cn.ibm.com
> Tel:   (86)10-82450677
> --
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Issues when running unit tests in OpenStack

2014-04-08 Thread victor stinner
Hi,

I have some issues when running unit tests in OpenStack. I would like to help, 
but I don't know where I should start and how I can fix these bugs. My use case 
is to run unit tests and rerun a single test if one or more tests failed. Well, 
it should be the most basic use case, no?


(1) First problem: if a Python module cannot be loaded, my terminal is flooded 
with a binary stream which looks like:

... 
tCase.test_deprecated_without_replacement\xd7\xe1\x06\xa1\xb3)\x01@l...@atests.unit.test_versionutils.DeprecatedTestCa
 ...

IMO it's a huge bug in the testr tool: "testr run" command should not write 
binary data into stdout. It makes development very hard.


(2) When a test fails, it's hard to find the command to rerun a single failing 
test.

Using the tool trial, I can just copy/paste the "FQDN" name of the failing test 
and run "trial FQDN". Example:

   trial tests.unit.test_timeutils.TestIso8601Time.test_west_normalize

Using the tool nosetests, you have to add a colon between the module and the 
method. Example:

   nosetests tests.unit.test_timeutils:TestIso8601Time.test_west_normalize

Using tox, in most OpenStack projects, adding the name of the failing test to 
the tox command is usually ignored. I guess that it's an issue with tox.ini of 
the project? tox reruns the whole test suite, which is usually very slow (it 
takes some minutes even on a fast computer). Example:

   tox -e py27 tests.unit.test_timeutils.TestIso8601Time.test_west_normalize

I try sometimes to activate the virtualenv and then type:

   testr run tests.unit.test_timeutils.TestIso8601Time.test_west_normalize

It usually fails for different reasons.

Example with python-swiftclient. I run unit tests using "tox -e py33". Some 
tests are failing. I enter the virtual environment and type the following 
command to rerun a failing test:

   testr run tests.test_swiftclient.TestPutObject.test_unicode_ok

The test is not run again, no test is run. It's surprising because the same 
command works with Python 2. It's probably a bug in testr?



(3) testscenarios doesn't work with nosetests. It's annoying because for the 
reasons listed above, I prefer to run tests using nosetests. Why do we use 
testscenarios and not something else? Do we plan to support nosetests (and 
other Python test runners) for testscenarios?
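
For readers who haven't used it, here is a minimal testscenarios example (the
test itself is made up): each (name, attributes) pair is applied to a copy of
the test, so one method runs once per scenario. The runner has to cooperate
with this expansion, which is presumably where nosetests falls down.

    import testscenarios

    class TestTimezoneOffsets(testscenarios.TestWithScenarios):

        scenarios = [
            ('east', {'offset': '+01:00'}),
            ('west', {'offset': '-08:00'}),
        ]

        def test_offset_has_sign(self):
            # self.offset is injected from the active scenario
            self.assertIn(self.offset[0], '+-')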


Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Managing changes to the Hot Specification (hot_spec.rst)

2014-04-08 Thread Thomas Spatzier
> From: Steven Dake 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 07/04/2014 21:45
> Subject: Re: [openstack-dev] [heat] Managing changes to the Hot
> Specification (hot_spec.rst)
>
> On 04/07/2014 11:01 AM, Zane Bitter wrote:
> > On 06/04/14 14:23, Steven Dake wrote:
> >> Hi folks,
> >>
> >> There are two problems we should address regarding the growth and change
> >> to the HOT specification.
> >>
> >> First our +2/+A process for normal changes doesn't totally make sense
> >> for hot_spec.rst.  We generally have some informal bar for controversial
> >> changes (of which changes to hot_spec.rst is generally considered:). I
> >> would suggest raising the bar on hot_spec.rst to at-least what is
> >> required for a heat-core team addition (currently 5 approval votes).
> >> This gives folks plenty of time to review and make sure the heat core
> >> team is committed to the changes, rather then a very small 2 member
> >> subset.  Of course a -2 vote from any heat-core would terminate the
> >> review as usual.
> >>
> >> Second, there is a window where we say "hey we want this sweet new
> >> functionality" yet it remains "unimplemented".  I suggest we create some
> >> special tag for these intrinsics/sections/features, so folks know they
> >> are unimplemented and NOT officially part of the specification until
> >> that is the case.
> >>
> >> We can call this tag something simple like
> >> "*standardization_pending_implementation* for each section which is
> >> unimplemented.  A review which proposes this semantic is here:
> >> https://review.openstack.org/85610
> >
> > This part sounds highly problematic to me.
> >
> > I agree with you and Thomas S that using Gerrit to review proposed
> > specifications is a nice workflow, even if the "proper" place to do
> > this is on the wiki and linked to a blueprint. I would probably go
> > along with everything you suggested provided that anything pending
> > implementation goes in a separate file or files that are _not_
> > included in the generated docs.
> >

Yeah, it would be optimal to be able to use gerrit for shaping it while
excluding it from the published docs.

> This is a really nice idea.  We could have a hot_spec_pending.rst which
> lists out the pending ideas so we can have a gerrit review of this doc.
> The doc wouldn't be generated into the externally rendered documentation.
>
> We could still use blueprints before/after the discussion is had on the
> hot_spec_pending.rst doc, but hot_spec_pending.rst would allow us to
> collaborate properly on the changes.

This could be a pragmatic option. What would be even better would be to
somehow flag sections in hot_spec.rst so they do not get included in the
published docs. This way, we would be able to continuously merge changes
that come in while features are being implemented (typo fixes,
clarifications of existing public spec etc).

Has someone tried this out already? I read there is something like this for
rst:

.. options
   exclude 

>
> The problem I have with blueprints is they suck for collaborative
> discussion, whereas gerrit rocks for this purpose.  In essence, I just
want a tidier way to discuss the changes than blueprints provide.

Fully agree. Gerrit is nice for collaboration and enforces discipline.
While BPs and wiki are good, they require everyone to really _be_
disciplined ;-)

>
> Other folks on this thread, how do you feel about this approach?
>
> Regards
> -steve
> > cheers,
> > Zane.
> >
> >> My goal is not to add more review work to people's time, but I really
> >> believe any changes to the HOT specification have a profound impact on
> >> all things Heat, and we should take special care when considering
these
> >> changes.
> >>
> >> Thoughts or concerns?
> >>
> >> Regards,
> >> -steve
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Question about modifying instance attribute(such as cpu-QoS, disk-QoS ) without shutdown the instance

2014-04-08 Thread Zhangleiqiang (Trump)
Hi, Stackers, 

For Amazon, after calling the ModifyInstanceAttribute API, the instance 
must be stopped.

In fact, the hypervisor can adjust these attributes online, but Amazon 
and OpenStack do not support it.

So I want to know: what is your advice about introducing the capability 
of adjusting these instance attributes online?
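
For what it's worth, here is a minimal libvirt sketch (the domain name and
values are hypothetical) of the kind of online adjustment I mean:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    # grow the vCPU count while the domain keeps running
    dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # adjust CPU QoS (relative CPU shares) on the live domain
    dom.setSchedulerParametersFlags({'cpu_shares': 2048},
                                    libvirt.VIR_DOMAIN_AFFECT_LIVE)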


Thanks


--
zhangleiqiang (Trump)

Best Regards


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]

2014-04-08 Thread Ilya Tyaptin
Hi!

In this method "q" is query argument and used for filtering response.

In code you may set this parameter with list of dicts, like this:

q=[{"field": "resource_id", "value": "a", "op": "eq"}]

More examples of queries at
http://docs.openstack.org/developer/ceilometer/webapi/v2.html?highlight=query#api-and-cli-query-examples
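
Putting it together, a fuller sketch (the endpoint and token here are
placeholders):

    from ceilometerclient.v2 import client

    ceilometer = client.Client(endpoint='http://controller:8777',
                               token='<your-auth-token>')

    # list only the resources whose resource_id equals "a"
    query = [{'field': 'resource_id', 'op': 'eq', 'value': 'a'}]
    for resource in ceilometer.resources.list(q=query):
        print(resource.resource_id)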


On Tue, Apr 8, 2014 at 1:13 AM, Hachem Chraiti  wrote:

> hi everyone, that's some Python code:
>
> from ceilometerclient.v2 import client
>
> ceilometer =client.Client(endpoint='http://controller:8777/v2/resources',
> token='e8e70342225d64d1d20a')
>
> print  ceilometer.resources.list(q)
>
>
> what's this "q" parameter??
>
> Sincerely,
> Chraiti Hachem, software engineer
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Tyaptin Ilia,

Intern Software Engineer.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues when running unit tests in OpenStack

2014-04-08 Thread Sylvain Bauza
Hi Victor,

This page is worth it : https://wiki.openstack.org/wiki/Testr
Comments inline.

-Sylvain

2014-04-08 10:13 GMT+02:00 victor stinner :

> Hi,
>
> I have some issues when running unit tests in OpenStack. I would like to
> help, but I don't know where I should start and how I can fix these bugs.
> My use case is to run unit tests and rerun a single test if one or more
> tests failed. Well, it should be the most basic use case, no?
>
>
> (1) First problem: if a Python module cannot be loaded, my terminal is
> flooded with a binary stream which looks like:
>
> ... 
> tCase.test_deprecated_without_replacement\xd7\xe1\x06\xa1\xb3)\x01@l...@atests.unit.test_versionutils.DeprecatedTestCa
> ...
>
> IMO it's a huge bug in the testr tool: "testr run" command should not
> write binary data into stdout. It makes development very hard.
>
>

That's happening when testr is trying to locate all unittest classes.
Switching to nosetests without parallelism can help, as the traceback is
shown there.


>
> (2) When a test fails, it's hard to find the command to rerun a single
> failing test.
>
> Using the tool trial, I can just copy/paste the "FQDN" name of the failing
> test and run "trial FQDN". Example:
>
>trial tests.unit.test_timeutils.TestIso8601Time.test_west_normalize
>
> Using the tool nosetests, you have to add a colon between the module and
> the method. Example:
>
>nosetests tests.unit.test_timeutils:TestIso8601Time.test_west_normalize
>
> Using tox, in most OpenStack projects, adding the name of the failing test
> to the tox command is usually ignored. I guess that it's an issue with
> tox.ini of the project? tox reruns the whole test suite, which is usually
> very slow (it takes some minutes even on a fast computer). Example:
>
>tox -e py27
> tests.unit.test_timeutils.TestIso8601Time.test_west_normalize
>
> I try sometimes to activate the virtualenv and then type:
>
>testr run tests.unit.test_timeutils.TestIso8601Time.test_west_normalize
>
> It usually fails for different reasons.
>
> Example with python-swiftclient. I run unit tests using "tox -e py33". Some
> tests are failing. I enter the virtual environment and type the following
> command to rerun a failing test:
>
>testr run tests.test_swiftclient.TestPutObject.test_unicode_ok
>
> The test is not run again, no test is run. It's surprising because the
> same command works with Python 2. It's probably a bug in testr?
>
>
>
See the wiki page I gave to you. Some helpful tricks are there. That said,
I never had the issue you mentioned related to only checking one unittest
by providing the path.
When I'm isolating one test, tox -epy27  is enough for me.
Run_tests.sh also accepts , which is not necessarily the full
python path for the class or the classmethod to be checked.



>
> (3) testscenarios doesn't work with nosetests. It's annoying because for
> the reasons listed above, I prefer to run tests using nosetests. Why do we
> use testscenarios and not something else? Do we plan to support nosetests
> (and other Python test runners) for testscenarios?
>
>
>
You can run testtools without testr. Nosetests has been marked as
non-supported, IIRC.

-Sylvain


> Victor
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues when running unit tests in OpenStack

2014-04-08 Thread Sylvain Bauza
Made a typo,


2014-04-08 10:41 GMT+02:00 Sylvain Bauza :

>
>> IMO it's a huge bug in the testr tool: "testr run" command should not
>> write binary data into stdout. It makes development very hard.
>>
>>
>
> That's happening when testr is trying to locate all unittest classes.
> Switching to nosetests without parallelism can help, as the traceback is
> shown there.
>
>

I meant you can switch to testtools without testr, not nosetests :-)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues when running unit tests in OpenStack

2014-04-08 Thread victor stinner
Oh, I already got feedback from colleagues :-) Thanks.

> (1) First problem: if a Python module cannot be loaded, my terminal is
> flooded with a binary stream which looks like:
> 
> ...
> tCase.test_deprecated_without_replacement\xd7\xe1\x06\xa1\xb3)\x01@l...@atests.unit.test_versionutils.DeprecatedTestCa
> ...

This issue was reported:
https://bugs.launchpad.net/testrepository/+bug/1271133

It looks to depend on 4 changes in 4 projects:

* subunit: https://code.launchpad.net/~alexei-kornienko/subunit/bug-1271133
* testrepository: 
https://code.launchpad.net/~alexei-kornienko/testrepository/bug-1271133
* testtools: https://github.com/testing-cabal/testtools/pull/77
* Python unittest (for testtools): http://bugs.python.org/issue19746

> (2) When a test fails, it's hard to find the command to rerun a single
> failing test.
> 
> ...
> I try sometimes to activate the virtualenv and then type:
> 
>testr run tests.unit.test_timeutils.TestIso8601Time.test_west_normalize

In the virtual environment, the following command works and is fast:

   python -m testtools.run tests.test_swiftclient.TestPutObject.test_unicode_ok

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-08 Thread Jiří Stránský

On 8.4.2014 01:50, Robert Collins wrote:

tl;dr: 3 more core members to propose:
bnemec
greghaynes
jdon



+1

Jirka


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Icehouse RC2 available

2014-04-08 Thread Thierry Carrez
Hello everyone,

Due to various release-critical issues detected in Keystone icehouse
RC1, a new release candidate was just generated. You can find a list of
the 8 bugs fixed and a link to the RC2 source tarball at:

https://launchpad.net/keystone/icehouse/icehouse-rc2

Unless new release-critical issues are found that warrant a release
candidate respin, this RC2 will be formally released as the 2014.1 final
version on April 17 next week. You are therefore strongly encouraged to
test and validate this tarball!

Alternatively, you can directly test the milestone-proposed branch at:
https://github.com/openstack/keystone/tree/milestone-proposed

If you find an issue that could be considered release-critical and
justify a release candidate respin, please file it at:

https://bugs.launchpad.net/keystone/+filebug

and tag it *icehouse-rc-potential* to bring it to the release crew's
attention.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] use of the "oslo" namespace package

2014-04-08 Thread Julien Danjou
On Mon, Apr 07 2014, Doug Hellmann wrote:

> We can avoid adding to the problem by putting each new library in its
> own package. We still want the Oslo name attached for libraries that
> are really only meant to be used by OpenStack projects, and so we need
> a naming convention. I'm not entirely happy with the "crammed
> together" approach for oslotest and oslosphinx. At one point Dims and
> I talked about using a prefix "oslo_" instead of just "oslo", so we
> would have "oslo_db", "oslo_i18n", etc. That's also a bit ugly,
> though. Opinions?

Honestly, I think it'd be better to not have oslo at all and use
independent – if possible explicit – names for everything.

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo][Neutron] Tripleo & Neutron

2014-04-08 Thread mar...@redhat.com
On 07/04/14 18:05, Jan Provazník wrote:
> On 04/07/2014 03:49 PM, Roman Podoliaka wrote:
> 2. HA the neutron node. For each neutron services/agents of
> interest (neutron-dhcp-agent, neutron-l3-agent,
> neutron-lbaas-agent ... ) fix any issues with running these in
> HA - perhaps there are none \o/? Useful whether using a
> dedicated Neutron node or just for HA the undercloud-control
> node
>>
>> - HA for DHCP-agent is provided out-of-box - we can just use
>> 'dhcp_agents_per_network' option
>> (https://github.com/openstack/tripleo-image-elements/blob/master/elements/neutron/os-apply-config/etc/neutron/neutron.conf#L59)
>>
>>
>>  - for L3-agent there is a BP started, but the patches haven't been
>> merged yet  -
>> https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
>>
> 
> Right, though a patch which adds A/A for L3 agent has already been sent:
> https://review.openstack.org/#/c/64553/ so we might test how it works.
> 
> There is a trello card for Neutron HA here:
> https://trello.com/c/DaIs1zxb/82-neutron-ha-redundant-environment

thanks for the pointers Jan!

> 
> 
> To add more questions:
> Would it be possible to use Neutron's LBaaS in TripleO (which is the
> long-term plan) and what are missing bits?

Good question - you told me you had heard of 'technical issues' with
baremetal... does anyone know if these still exist with Ironic? I guess
another point for investigation.

> 
> And (maybe trivial) question:
> for present time (when we don't use Neutron's LBaaS), is there a simple
> way how to get virtual IPs from Neutron? We set up haproxy in overcloud
> controller nodes, but we need to allocate virtual IP pointing to haproxy.

I don't know, perhaps others can chime in here (enikanorov may be the
person to ask here - he is the lead for the lbaas subteam)

thanks, marios

> 
> Jan
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo][Neutron] Tripleo & Neutron

2014-04-08 Thread mar...@redhat.com
On 07/04/14 16:49, Roman Podoliaka wrote:
> Hi all,
> 
> Perhaps, we should file a design session for Neutron-specific questions?

that's a good idea - unfortunately I won't be at summit... if there is
more interest and you do go ahead with this, please let me know; I will
try and join by hangout, for example.

> 
 1. Define a neutron node (tripleo-image-elements/disk-image-builder) and 
 make sure it deploys and scales ok (tripleo-heat-templates/tuskar). This 
 comes under by lifeless blueprint at 
 https://blueprints.launchpad.net/tripleo/+spec/tripleo-tuskar-deployment-scaling-topologies
> 
> As far as I understand, this must be pretty straightforward: just
> reuse the neutron elements we need when building an image for a
> neutron node.

Right - not all of these points I listed need blueprints (or perhaps
none ;) ) but for many it will be just verification that 'it works'... I
agree that we should just be able to build the image with whichever
neutron services we need there. I expect that getting the plumbing right
may be the issue here (heat template params, credentials, database etc)


> 
 2. HA the neutron node. For each neutron services/agents of interest 
 (neutron-dhcp-agent, neutron-l3-agent, neutron-lbaas-agent ... ) fix any 
 issues with running these in HA - perhaps there are none \o/? Useful 
 whether using a dedicated Neutron node or just for HA the 
 undercloud-control node
> 
> - HA for DHCP-agent is provided out-of-box - we can just use
> 'dhcp_agents_per_network' option
> (https://github.com/openstack/tripleo-image-elements/blob/master/elements/neutron/os-apply-config/etc/neutron/neutron.conf#L59)
> 
> - for L3-agent there is a BP started, but the patches haven't been
> merged yet  - 
> https://blueprints.launchpad.net/neutron/+spec/l3-high-availability

thanks! Seems I am the only one that didn't know of this blueprint :)

> 
> - API must be no different from other API services we have

not sure what that means ^^^

> 
 3. Does it play with Ironic OK? I know there were some issues with Ironic 
 and Neutron DHCP, though I think this has now been addressed. Other 
 known/unkown bugs/issues with Ironic/Neutron - the baremetal driver will 
 be deprecated at some point...
> 
> You must be talking about specifying PXE boot options by the means of
> neutron-dhcp-agent. Yes, this has been merged to Neutron for a while
> now (https://review.openstack.org/#/c/30441/).

thanks - not sure if that was it as the issues I heard about were more
recent than that (from colleagues working on that, I could chase up
specifics if necessary). In any case my point was rather more general...
Ironic is obviously still new, so are there any current known or
expected issues we need to deal with?

thanks, marios

> 
> Thanks,
> Roman
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo][Neutron] Tripleo & Neutron

2014-04-08 Thread mar...@redhat.com
On 07/04/14 15:56, Dmitriy Shulyak wrote:
> Hi Marios, thanks for raising this.
> 
> There is in progress blueprint that should address some issues with neutron
> ha deployment -
> https://blueprints.launchpad.net/neutron/+spec/l3-high-availability.
> 
> Right now neutron-dhcp agent can be configured as active/active.
> 
> But l3-agent and metadata-agent still should be active/passive,
> afaik the best approach would be to use corosync+pacemaker, that is also
> stated in official documentation
> http://docs.openstack.org/high-availability-guide/content/ch-network.html.
> 
What other choices, except corosync+pacemaker, do we have for neutron HA?

thanks for the pointers Dmitriy! Perhaps this can be discussed if a
discussion/session is put together at summit as suggested by Roman

marios

> 
> Thanks
> 
> 
> 
> On Mon, Apr 7, 2014 at 11:18 AM, mar...@redhat.com wrote:
> 
>> Hello Tripleo/Neutron:
>>
>> I've recently found some cycles to look into Neutron. Mostly because
>> networking rocks, but also so we can perhaps better address Neutron
>> related issues/needs down the line. I thought it may be good to ask the
>> wider team if there are others that are also interested in
>> Neutron&Tripleo. We could form a loose focus group to discuss blueprints
>> and review each other's code/chase up with cores. My search may have
>> missed earlier discussions in openstack-dev[Tripleo][Neutron] and
>> Tripleo bluprints so my apologies if this has already been started
>> somewhere. If any of the above is of interest then:
>>
>> *is the following list sane - does it make sense to pick these off or
>> are these 'nice to haves' but not of immediate concern? Even just
>> validating, prioritizing and recording concerns could be worthwhile for
>> example?
>> * are you interested in discussing any of the following further and
>> perhaps investigating and/or helping with blueprints where/if necessary?
>>
>> Right now I have:
>>
>> [Undercloud]:
>>
>> 1. Define a neutron node (tripleo-image-elements/disk-image-builder) and
>> make sure it deploys and scales ok (tripleo-heat-templates/tuskar). This
>> comes under by lifeless blueprint at
>>
>> https://blueprints.launchpad.net/tripleo/+spec/tripleo-tuskar-deployment-scaling-topologies
>>
>> 2. HA the neutron node. For each neutron services/agents of interest
>> (neutron-dhcp-agent, neutron-l3-agent, neutron-lbaas-agent ... ) fix any
>> issues with running these in HA - perhaps there are none \o/? Useful
>> whether using a dedicated Neutron node or just for HA the
>> undercloud-control node
>>
>> 3. Does it play with Ironic OK? I know there were some issues with
>> Ironic and Neutron DHCP, though I think this has now been addressed.
>> Other known/unkown bugs/issues with Ironic/Neutron - the baremetal
>> driver will be deprecated at some point...
>>
>> 4. Subnetting. Right now the undercloud uses a single subnet. Does it
>> make sense to have multiple subnets here - one point I've heard is for
>> segregation of your undercloud nodes (i.e. <1 broadcast domain).
>>
>> 5. Security. Are we at least using Neutron as we should be in the
>> Undercloud, security-groups, firewall rules etc?
>>
>> [Overcloud]:
>>
>> 1. Configuration. In the overcloud "it's just Neutron". So one concern
>> is which and how to expose neutron configuration options via Tuskar-UI.
>> We would pass these through the deployment heat-template for definition
>> of Neutron plugin-specific .conf files (like dnsmasq-neutron.conf) for
>> example or initial definition of tenant subnets and router(s) for access
>> to external networks.
>>
>> 2. 3. ???
>>
>>
>> thanks! marios
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues when running unit tests in OpenStack

2014-04-08 Thread victor stinner
Sylvain Bauza wrote:
>> (2) When a test fails, it's hard to find the command to rerun a single
>> failing test.
> (...) 
> 
> See the wiki page I gave to you. Some helpful tricks are there. That said, I
> never had the issue you mentioned related to only checking one unittest by
> providing the path.
> When I'm isolating one test, tox -epy27  is enough for me.

For the "parameter ignore", here is an example with Oslo Incubator (which uses 
testr, at least for Python 2):

$ tox -e py27
...
FAIL: tests.unit.test_timeutils.TestIso8601Time.test_west_normalize
...
Ran 8675 (+8673) tests in 16.137s (+16.083s)
...

$ tox -e py27 tests.unit.test_timeutils.TestIso8601Time.test_west_normalize 
...
FAIL: tests.unit.test_timeutils.TestIso8601Time.test_west_normalize
...
Ran 8675 (+8455) tests in 17.250s (-39.332s)

> You can run testtools without testr. Nosetests has been marked as
> non-supported, IIRC.

Oslo Incubator runs unit tests using nosetests. I tried to use testr, but it 
fails on Python 3 because the openstack.common.eventlet_backdoor module cannot 
be loaded (eventlet is not available on Python 3).

How can I ignore openstack.common.eventlet_backdoor using testr? I understood 
that testr first loads all modules and then filters them using the regex.
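
For what it's worth, one workaround sketch (module and class names are
illustrative): guard the eventlet import so the test module can still be
imported during test listing on Python 3, and skip the tests at setUp:

    import testtools

    try:
        import eventlet
    except ImportError:
        eventlet = None

    class EventletBackdoorTest(testtools.TestCase):

        def setUp(self):
            super(EventletBackdoorTest, self).setUp()
            if eventlet is None:
                self.skipTest('eventlet is not available on this Python')

        def test_eventlet_present(self):
            self.assertIsNotNone(eventlet)

The trick is just to keep failing imports out of the test module's top level
so that the listing itself can import it.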

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Operators & Design Summit ideas for Atlanta

2014-04-08 Thread Steven Hardy
On Wed, Apr 02, 2014 at 08:24:00AM -0500, Dolph Mathews wrote:
> On Mon, Mar 31, 2014 at 10:40 PM, Adam Young  wrote:
> 
> > On 03/28/2014 03:01 AM, Tom Fifield wrote:
> >
> >> Thanks to those projects that responded. I've proposed sessions in swift,
> >> ceilometer, tripleO and horizon.
> >>
> >
> >
> > Keystone would also be interested in user feedback, of course.
> 
> 
> Crossing openstack-dev threads [1] here, gathering feedback on proposed
> deprecations would be a great topic for such a session.
> 
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-April/031652.html

+1, I think a cross-project session on deprecation strategy/process would be
hugely beneficial, particularly if we can solicit feedback from operators
and deployers at the same time to agree a workable process.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Operators & Design Summit ideas for Atlanta

2014-04-08 Thread Steven Hardy
On Fri, Mar 28, 2014 at 03:01:30PM +0800, Tom Fifield wrote:
> Thanks to those projects that responded. I've proposed sessions in
> swift, ceilometer, tripleO and horizon.

I just created a session for Heat:

http://summit.openstack.org/cfp/details/247

Historically Heat sessions have been quite well attended by operators and
deployers with lots of real-world feedback from users.

However I still think having a specific session dedicated to this
discussion could be good, and would potentially serve to provide some
additional focus and definition between user-visible and internal-design
discussions.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-08 Thread Vladimir Kozhukalov
Guys, thank you very much for your comments,

I thought a lot about why we need to be so limited in IPA use cases. Now it's
much clearer to me. Indeed, having some kind of agent running inside the host
OS is not what many people want to see. And now I'd rather agree with that.

But there are still some questions which are difficult for me to answer.
0) There is plenty of old hardware which does not have IPMI/iLO at all.
How is Ironic supposed to power it off and on? SSH? But Ironic is not
supposed to interact with the host OS.
1) We agreed that Ironic is the place where we can store hardware info
(the 'extra' field in the node model). But many modern hardware configurations
support hot-pluggable hard drives, CPUs, and even memory. How will Ironic
know that the hardware configuration has changed? Does it need to know about
hardware changes at all? Is it supposed that some monitoring agent (NOT the
Ironic agent) will be used for that? But if we already have a discovery
extension in the Ironic agent, then it sounds rational to use this extension
for monitoring as well. Right?
2) When I deal with some kind of hypervisor, I can always use the 'virsh list
--all' command in order to know which nodes are running and which aren't.
How am I supposed to know which nodes are still alive in the case of Ironic?
IPMI? Again, IPMI is not always available. And if IPMI is available, then
why do we need a heartbeat in the Ironic agent?



Vladimir Kozhukalov


On Fri, Apr 4, 2014 at 9:46 PM, Ezra Silvera  wrote:

> > Ironic's responsibility ends where the host OS begins. Ironic is a bare
> metal provisioning service, not a configuration management service.
>
> I agree with the above, but just to clarify I would say that Ironic
> shouldn't *interact* with the host OS once it has booted. Obviously it can
> still perform BM tasks underneath the OS (while it's up and running)  if
> needed (e.g., force shutdown through IPMI, etc..)
>
>
>
>
>
> Ezra
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-08 Thread Day, Phil
> -Original Message-
> From: Robert Collins [mailto:robe...@robertcollins.net]
> Sent: 07 April 2014 21:01
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [TripleO] config options, defaults, oh my!
> 
> So one interesting thing from the influx of new reviews is lots of patches
> exposing all the various plumbing bits of OpenStack. This is good in some
> ways (yay, we can configure more stuff), but in some ways its kindof odd -
> like - its not clear when https://review.openstack.org/#/c/83122/ is needed.
> 
> I'm keen to expose things that are really needed, but i'm not sure that /all/
> options are needed - what do folk think? 

I'm very wary of trying to make the decision in TripleO of what should and 
shouldn't be configurable in some other project.  For sure the number of 
config options in Nova is a problem, and one that's been discussed many times 
at summits.   However I think you could also make the case/assumption for any 
service that the debate about having a config option has already been held 
within that service as part of the review that merged that option in the code - 
re-running the debate about whether something should be configurable via 
TripleO feels like some sort of policing function on configurability above and 
beyond what the experts in that service have already considered, and that 
doesn't feel right to me.

Right now TripleO has a very limited view of what can be configured, based, as I 
understand it, primarily on what's needed for its CI job.  As more folks who have 
real deployments start to look at using TripleO it's inevitable that they are 
going to want to enable the settings that are important to them to be 
configured.  I can't imagine that anyone is going to add a configuration value 
for the sake of it, so can't we start with the perspective that we are slowly 
exposing the set of values that do need to be configured?


>Also, some things really should be higher order operations - like the neutron 
>callback to nova right - that should
> be either set to timeout in nova & configured in neutron, *or* set on both
> sides appropriately, never one half without the other.
> 
> I think we need to sort out our approach here to be systematic quite quickly
> to deal with these reviews.
> 
> Here's an attempt to do so - this could become a developers guide patch.
> 
> Config options in TripleO
> ==
> 
> Non-API driven configuration falls into four categories:
> A - fixed at buildtime (e.g. ld.so path)
> B - cluster state derived
> C - local machine derived
> D - deployer choices
> 
> For A, it should be entirely done within the elements concerned.
> 
> For B, the heat template should accept parameters to choose the desired
> config (e.g. the Neutron->Nova example above) but then express the config in
> basic primitives in the instance metadata.
> 
> For C, elements should introspect the machine (e.g. memory size to
> determine mysql memory footprint) inside os-refresh-config scripts; longer
> term we should make this an input layer to os-collect-config.
> 
> For D, we need a sensible parameter in the heat template and probably
> direct mapping down to instance metadata.
> 
I understand the split, but all of the reviews in question seem to be in D, so 
I'm not sure this helps much.  


> But we have a broader question - when should something be configurable at
> all?
> 
> In my mind we have these scenarios:
> 1) There is a single right answer
> 2) There are many right answers
> 
> An example of 1) would be any test-only option like failure injection
> - the production value is always 'off'. For 2), hypervisor driver is a great
> example - anything other than qemu is a valid production value
> :)
> 
> But, it seems to me that these cases actually subdivide further -
> 1a) single right answer, and the default is the right answer
> 1b) single right answer and it is not the default
> 2a) many right answers, and the default is the most/nearly most common
> one
> 2b) many right answers, and the default is either not one of them or is a
> corner case
> 
> So my proposal here - what I'd like to do as we add all these config options 
> to
> TripleO is to take the care to identify which of A/B/C/D they are and code
> them appropriately, and if the option is one of 1b) or 2b) make sure there is 
> a
> bug in the relevant project about the fact that we're having to override a
> default. If the option is really a case of 1a) I'm not sure we want it
> configurable at all.
> 

I'm not convinced that anyone is in a position to judge that there is a single 
right answer - I know the values that are right for my deployments, but I'm not 
arrogant enough to say that they are universally applicable.   You only have to 
see the wide range of OpenStack deployments presented at every summit to know 
that there are a lot of different use cases out there.   My worry is that if we 
try to have that debate in the context of a TripleO review, then we'll just 
spin between op

Re: [openstack-dev] [oslo] use of the "oslo" namespace package

2014-04-08 Thread Victor Stinner
Hi,

Le mardi 8 avril 2014, 10:54:24 Julien Danjou a écrit :
> On Mon, Apr 07 2014, Doug Hellmann wrote:
> > We can avoid adding to the problem by putting each new library in its
> > own package. We still want the Oslo name attached for libraries that
> > are really only meant to be used by OpenStack projects, and so we need
> > a naming convention. I'm not entirely happy with the "crammed
> > together" approach for oslotest and oslosphinx. At one point Dims and
> > I talked about using a prefix "oslo_" instead of just "oslo", so we
> > would have "oslo_db", "oslo_i18n", etc. That's also a bit ugly,
> > though. Opinions?
> 
> Honestly, I think it'd be better to not have oslo at all and use
> independent – if possible explicit – names for everything

I agree.

"oslo" name remembers me the "zope" fiasco. Except of zope.interfaces and the 
ZODB, I don't think that any Zope module was widely used outside Zope and it 
was a big fail. Because of that, Zope 3 restarted almost from scratch with 
small independent modules.

"oslo" and "openstack.common" look more and more like Zope bloated modules. 
For example, Oslo Incubator has 44 dependencies. Who outside OpenStack would 
like to use a module which has 44 dependencies? Especially if you need a 
single module like timeutils.

"nova.openstack.common.timeutils" name doesn't look correct: the Zen of Python 
says "Flat is better than nested": "xxx.timeutils" would be better. Same 
remark for "oslo.config.cfg" => "xxx.cfg".

Choosing a name is hard. Dropping "oslo" requires finding a completely new name. 
For example, "oslo.config" cannot be renamed to "config"; this name is already 
used on PyPI. Same issue for "messaging" (and "message" is also reserved).

"oslo.rootwrap" can be simply renamed to "rootwrap".

Other suggestions:

* oslo.config => cmdconfig
* oslo.messaging => msgqueue

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] use of the "oslo" namespace package

2014-04-08 Thread Thierry Carrez
Vishvananda Ishaya wrote:
> I dealt with this myself the other day and it was a huge pain. That said,
> changing all the packages seems like a nuclear option.

Yeah, package renaming is usually a huge pain for distributions, and we
already forced them through some oslo reversioning/renaming processes in
the past... so this should really be seen as the last solution. It's
also a pain for infra (repo renames) and developers who need to push the
package rename down all projects.
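
For anyone who hasn't hit it, the failure mode with the shared namespace looks 
roughly like this (illustrative; the exact error text varies by setup):

    # With oslo.config installed system-wide (e.g. as a distro package)
    # and oslo.messaging installed only in a virtualenv, the shared
    # 'oslo' namespace package can end up resolving to just one of the
    # two install locations:
    import oslo.config.cfg    # works
    import oslo.messaging     # ImportError: No module named messaging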

That said, if we are to do it, better do it at the start of Juno before
we graduate new libraries.

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Enabling ServerGroup filters by default (was RE: [nova] Server Groups are not an optional element, bug or feature ?)

2014-04-08 Thread Day, Phil
> https://bugs.launchpad.net/nova/+bug/1303983
> 
> --
> Russell Bryant

Wow - was there really a need to get that change merged within 12 hours and 
before others had a chance to review and comment on it ?

I see someone has already queried (post the merge) if there isn't a performance 
impact.

I've raised this point before - but apart from non-urgent security fixes 
shouldn't there be a minimum review period to make sure that all relevant 
feedback can be given ?

Phil 

> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: 07 April 2014 20:38
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Server Groups are not an optional
> element, bug or feature ?
> 
> On 04/07/2014 02:12 PM, Russell Bryant wrote:
> > On 04/07/2014 01:43 PM, Day, Phil wrote:
> >> Generally the scheduler's capabilities that are exposed via hints can
> >> be enabled or disabled in a Nova install by choosing the set of filters
> >> that are configured. However the server group feature doesn't fit
> >> that pattern - even if the affinity filter isn't configured the
> >> anti-affinity check on the server will still impose the anti-affinity
> >> behavior via throwing the request back to the scheduler.
> >>
> >> I appreciate that you can always disable the server-groups API
> >> extension, in which case users can't create a group (and so the
> >> server create will fail if one is specified), but that seems kind of
> >> at odds with other types of scheduling that have to be specifically
> >> configured in rather than out of a base system.    In particular having the API
> >> extension in by default but the ServerGroup Affinity and AntiAffinity
> >> filters not in by default seems an odd combination (it kind of works,
> >> but only by a retry from the host and that's limited to a number of
> >> retries).
> >>
> >> Given that the server group work isn't complete yet (for example the
> >> list of instances in a group isn't tidied up when an instance is
> >> deleted) I feel a tad worried that the current default configuration
> >> exposed this rather than keeping it as something that has to be
> >> explicitly enabled - what do others think ?
> >
> > I consider it a complete working feature.  It makes sense to enable
> > the filters by default.  It's harmless when the API isn't used.  That
> > was just an oversight.
> >
> > The list of instances in a group through the API only shows
> > non-deleted instances.
> >
> > There are some implementation details that could be improved (the
> > check on the server is the big one).
> >
> 
> https://bugs.launchpad.net/nova/+bug/1303983
> 
> --
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-08 Thread Sean Dague
On 04/08/2014 06:03 AM, Day, Phil wrote:
>> -Original Message-
>> From: Robert Collins [mailto:robe...@robertcollins.net]
>> Sent: 07 April 2014 21:01
>> To: OpenStack Development Mailing List
>> Subject: [openstack-dev] [TripleO] config options, defaults, oh my!
>>
>> So one interesting thing from the influx of new reviews is lots of patches
>> exposing all the various plumbing bits of OpenStack. This is good in some
>> ways (yay, we can configure more stuff), but in some ways its kindof odd -
>> like - its not clear when https://review.openstack.org/#/c/83122/ is needed.
>>
>> I'm keen to expose things that are really needed, but i'm not sure that /all/
>> options are needed - what do folk think? 
> 
> I'm very wary of trying to make the decision in TripleO of what should and 
> shouldn't be configurable in some other project. For sure the number of 
> config options in Nova is a problem, and one that's been discussed many times 
> at summits.   However I think you could also make the case/assumption for any 
> service that the debate about having a config option has already been held 
> within that service as part of the review that merged that option in the code 
> - re-running the debate about whether something should be configurable via 
> TripleO feels like some sort of policing function on configurability above 
> and beyond what the experts in that service have already considered, and that 
> doesn't feel right to me.
> 
> Right now TripleO has a very limited view of what can be configured, based, as 
> I understand it, primarily on what's needed for its CI job.  As more folks who 
> have real deployments start to look at using TripleO it's inevitable that they 
> are going to want to enable the settings that are important to them to be 
> configured.  I can't imagine that anyone is going to add a configuration 
> value for the sake of it, so can't we start with the perspective that we are 
> slowly exposing the set of values that do need to be configured?

I think Phil is dead on. I'll also share the devstack experience here.
Until we provided the way for arbitrary pass-through we were basically
getting a few patches every week that were "let me configure this
variable in the configs" over and over again.

I 100% agree that the config parameter space is huge and out of control,
and I actively challenge when people add new config options in Nova,
however those are knobs that people are using. If you limit what's
allowed to be configured, you limit the use of the tool. Like the old
adage about the fact that everyone only uses 10% of the functionality of
MS Word (sadly they don't all use the same 10%).

There was a really good proposal on the table by Mark MC a few cycles
ago about annotating the config options in projects with things like
'debug', 'tuning', so that it would be clear what variables we expected
people to change, and what variables we assume only experts would
change. I think if there is desire to push on the config explosion, that
would be the right place to do it.
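
From memory, the shape of that proposal was something like the following (the 
'category' argument is hypothetical - oslo.config has no such parameter - but 
the option shown is a real Nova one):

    from oslo.config import cfg

    # Existing-style option definition; 'default' and 'help' are real
    # oslo.config API.
    opt = cfg.IntOpt('scheduler_max_attempts',
                     default=3,
                     help='Maximum number of attempts to schedule an instance.')

    # The annotation idea: tag each option by intended audience, e.g.
    #   cfg.IntOpt(..., category='tuning')  # safe for deployers to change
    #   cfg.BoolOpt(..., category='debug')  # test/expert-only
    # so tools like TripleO could expose only 'tuning' options by default.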

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Trove] Managed Instances Feature

2014-04-08 Thread Day, Phil
It's more than just non-admin: it also allows a user to lock an instance so 
that they don’t accidentally perform some operation on a VM.

At one point it was (by default) an admin-only operation on the OSAPI, but it's 
always been open to all users in EC2.   Recently it was changed so that admin 
and non-admin locks are considered as separate things.

From: Chen CH Ji [mailto:jiche...@cn.ibm.com]
Sent: 08 April 2014 07:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Trove] Managed Instances Feature


The instance lock is a mechanism that prevents non-admin users from operating on 
the instance (resize, etc.; snapshot doesn't look to be currently included).
Permissions are a wider concept that lives mainly in the API layer to allow or 
prevent use of an API, so I guess the instance lock might be enough for 
preventing instance actions.


Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: 
jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, 
Beijing 100193, PRC


From: "Hopper, Justin" mailto:justin.hop...@hp.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>,
Date: 04/08/2014 02:05 PM
Subject: Re: [openstack-dev] [Nova][Trove] Managed Instances Feature





Phil,

I am reviewing the existing “check_instance_lock” implementation to see
how it might be leveraged.  Off the cuff, it looks like pretty much what we
need.  I need to look into the permissions to better understand how one
can “lock” an instance.

Thanks for the guidance.


Justin Hopper
Software Engineer - DBaaS
irc: juice | gpg: EA238CF3 | twt: @justinhopper




On 4/7/14, 10:01, "Day, Phil" <philip@hp.com> wrote:

>I can see the case for Trove creating an instance within a
>customer's tenant (if nothing else it would make adding it onto their
>Neutron network a lot easier), but I'm wondering why it really needs to
>be hidden from them?
>
>If the instances have a name that makes it pretty obvious that Trove
>created them, and the user presumably knows that they did this from Trove, why
>hide them?  I'd have thought that would lead to a whole bunch of
>confusion and support calls when they try to work out why they are out
>of quota and can only see a subset of the instances being counted by the
>system.
>
>If the need is to stop the users doing something with those instances
>then maybe we need an extension to the lock mechanism such that a lock
>can be made by a specific user (so the trove user in the same tenant
>could lock the instance so that a non-trove user in that tenant couldn't
>unlock).  We already have this to an extent, in that an instance locked
>by an admin can't be unlocked by the owner, so I don't think it would be
>too hard to build on that.   Feels like that would be a lot more
>transparent than trying to obfuscate the instances themselves.
>
>> -Original Message-
>> From: Hopper, Justin
>> Sent: 06 April 2014 01:37
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Nova][Trove] Managed Instances Feature
>>
>> Russell,
>>
>> Thanks for the quick reply. If I understand what you are suggesting it
>>is that
>> there would be one Trove-Service Tenant/User that owns all instances
>>from
>> the perspective of Nova.  This was one option proposed during our
>> discussions.  However, what we thought would be best is to continue to
>>use
>> the user credentials so that Nova has the correct association.  We
>>wanted a
>> more substantial and deliberate relationship between Nova and a
>> dependent service.  In this relationship, Nova would acknowledge which
>> instances are being managed by which Services and while ownership was
>>still
>> to that of the User, management/manipulation of said Instance would be
>> solely done by the Service.
>>
>> At this point the guard that Nova needs to provide around the instance
>>does
>> not need to be complex.  It would even suffice to keep those instances
>> hidden from such operations as ³nova list² when invoked by directly by
>>the
>> user.
>>
>> Thanks,
>>
>> Justin Hopper
>> Software Engineer - DBaaS
>> irc: juice | gpg: EA238CF3 | twt: @justinhopper
>>
>>
>>
>>
>> On 4/5/14, 14:20, "Russell Bryant" <rbry...@redhat.com> wrote:
>>
>> >On 04/04/2014 08:12 PM, Hopper, Justin wrote:
>> >> Greetings,
>> >>
>> >> I am trying to address an issue from certain perspectives and I think
>> >> some support from Nova may be needed.
>> >>
>> >> _Problem_
>> >> Services like Trove run in Nova Com

Re: [openstack-dev] [nova] Server Groups are not an optional element, bug or feature ?

2014-04-08 Thread Day, Phil


> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: 07 April 2014 19:12
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Server Groups are not an optional
> element, bug or feature ?
> 
>
...
 
> I consider it a complete working feature.  It makes sense to enable the 
> filters
> by default.  It's harmless when the API isn't used.  That was just an 
> oversight.
>
> The list of instances in a group through the API only shows non-deleted
> instances.

True, but the lack of even a soft delete on the rows in 
instance_group_member worries me - it's not clear why that wasn't fixed rather 
than just hiding the deleted instances.   I'd have expected the full DB 
lifecycle to be implemented before something was considered a complete working 
feature.

> 
> There are some implementation details that could be improved (the check
> on the server is the big one).
> 
> --
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-08 Thread Khanh-Toan Tran
“Abusive usage”: if users can request anti-affinity VMs, then why wouldn’t
they use it? This will result in users constantly requesting that all their
VMs be in the same anti-affinity group, which makes the scheduler choose one
physical host per VM. This will quickly flood the infrastructure and interfere
with the admin’s objectives (e.g. consolidation that groups VMs instead of
spreading them, spare hosts, etc.); at some point it will be reported back
that there is no host available, which is a bad experience for the user.





From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday 8 April 2014 01:02
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones :
possible or not ?



On 3 April 2014 08:21, Khanh-Toan Tran 
wrote:

Otherwise we cannot provide redundancy to client except using Region which
is dedicated infrastructure and networked separated and anti-affinity
filter which IMO is not pragmatic as it has tendency of abusive usage.



I'm sorry, could you explain what you mean here by 'abusive usage'?

--

Ian.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Split Oslo Incubator?

2014-04-08 Thread Victor Stinner
(Follow-up of the "[olso] use of the "oslo" namespace package" thread)

Hi,

The openstack.common module, also known as "Oslo Incubator" or "OpenStack 
Common Libraries", has 44 dependencies. IMO we've reached a point where it has 
become too huge. Would it be possible to split it into smaller parts and 
distribute them on PyPI with stable APIs? I don't know Oslo Incubator well 
enough to suggest the best granularity; a hint can be the number of dependencies.

Sharing code is a good idea, but now we have SQLAlchemy, WSGI, cryptography, 
RPC, etc. in the same module. Who needs all these features at once? Oslo 
Incubator must be usable outside OpenStack.


Currently, Oslo Incubator is installed and updated manually using an 
"update.sh" script which copies ".py" files and replaces "openstack.common" with 
"nova.openstack.common" (where nova is the name of the project where Oslo 
Incubator is installed).
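
The interesting part of that sync is essentially a search-and-replace. A 
minimal sketch of what it does per file (illustrative, not the actual script):

    import re

    def rebrand(source, project='nova'):
        # Rewrite 'openstack.common' references so the copied module
        # lives under e.g. 'nova.openstack.common' in the target project.
        return re.sub(r'\bopenstack\.common\b',
                      '%s.openstack.common' % project, source)

    print(rebrand('from openstack.common import timeutils'))
    # -> from nova.openstack.common import timeutils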

I guess that update.sh was written to solve the two following points; tell me 
if I'm wrong:

 - unstable API: the code changes too often, whereas users don't want to 
update their code regularly. Nova maybe has an old version of Oslo Incubator 
because of that.

 - only copy a few files, to avoid a lot of dependencies and copying useless files

Smaller modules should solve these issues. They should be used as modules: 
installed system-wide, not copied into each project. So fixing a bug would only 
require a single change, without having to "synchronize" each project.


Yesterday, I proposed adding a new time_monotonic() function to the timeutils 
module. I was asked instead to enhance existing modules (like Monotime).

We should now maybe move code from Oslo Incubator to "upstream" projects. For 
example, timeutils extends the iso8601 module. We should maybe contribute to 
that project and replace usage of timeutils with direct calls to iso8601?
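
For instance (a trivial sketch; iso8601.parse_date() is the upstream call that 
the timeutils parse_isotime() helper wraps):

    import iso8601

    # Calling the upstream library directly instead of going through
    # openstack.common.timeutils:
    dt = iso8601.parse_date('2014-04-08T12:30:00Z')
    print(dt.isoformat())  # 2014-04-08T12:30:00+00:00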

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Trove] Managed Instances Feature

2014-04-08 Thread Day, Phil
Hi Justin,

Glad you like the idea of using lock ;-) 

I still think you need some more granularity than user or admin - currently, for 
Trove to lock the user's VMs as admin it would need an account that has admin 
rights across the board in Nova, and I don't think folks would want to delegate 
that much power to Trove.

Also, the folks who genuinely need to enforce an admin-level lock on a VM 
(normally if there is some security issue with the VM) wouldn’t want Trove to 
be able to unlock it.

So I think we're on the right lines, but it needs more thinking about how to get 
a bit more granularity - I'm thinking of some other variant of lock that fits 
somewhere between the current user and admin locks, and is controlled via 
policy by a specific role, so you have something like:

User without AppLock role - can apply/remove the user lock to an instance.   Cannot 
perform operations if any lock is set on the instance
User with AppLock role - can apply/remove application lock to instance.   
Cannot perform operations on the instance if the admin lock is set
User with Admin role - can apply/remove admin lock.   Can perform any 
operations on the instance
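
As a rough sketch of the semantics I have in mind (the names and rankings are 
purely illustrative, not Nova code):

    # Strength of each lock and of each caller; a caller may operate on
    # an instance only if their rank meets or beats the lock's rank.
    LOCK_RANK = {None: 0, 'user': 1, 'application': 2, 'admin': 3}
    ROLE_RANK = {'user': 0, 'applock': 2, 'admin': 3}

    def may_operate(role, lock):
        return ROLE_RANK[role] >= LOCK_RANK[lock]

    assert may_operate('user', None)            # unlocked: fine
    assert not may_operate('user', 'user')      # any lock blocks plain users
    assert may_operate('applock', 'application')
    assert not may_operate('applock', 'admin')  # only admin clears admin lock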

Phil

> -Original Message-
> From: Hopper, Justin
> Sent: 07 April 2014 19:01
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][Trove] Managed Instances Feature
> 
> Phil,
> 
> I think your “lock” concept is more along the lines of what we are looking for.
> Hiding them is not a requirement.  Preventing the user from using Nova
> directly on those Instances is.  So locking it with an “Admin” user so that 
> they
> could not snapshot, resize it directly in Nova would be great.  When they use
> the Trove API, Trove, as Admin, could “unlock” those Instances, make the
> modification and then “lock” them after it is complete.
> 
> Thanks,
> 
> Justin Hopper
> Software Engineer - DBaaS
> irc: juice | gpg: EA238CF3 | twt: @justinhopper
> 
> 
> 
> 
> On 4/7/14, 10:01, "Day, Phil"  wrote:
> 
> >I can see the case for Trove creating an instance within a
> >customer's tenant (if nothing else it would make adding it onto their
> >Neutron network a lot easier), but I'm wondering why it really needs to
> >be hidden from them?
> >
> >If the instances have a name that makes it pretty obvious that Trove
> >created them, and the user presumably knows that they did this from Trove,
> why
> >hide them?  I'd have thought that would lead to a whole bunch of
> >confusion and support calls when they try to work out why they are out
> >of quota and can only see a subset of the instances being counted by the
> >system.
> >
> >If the need is to stop the users doing something with those instances
> >then maybe we need an extension to the lock mechanism such that a lock
> >can be made by a specific user (so the trove user in the same tenant
> >could lock the instance so that a non-trove user in that tenant
> >couldn't unlock).  We already have this to an extent, in that an
> >instance locked by an admin can't be unlocked by the owner, so I don't
> think it would be
> >too hard to build on that.   Feels like that would be a lot more
> >transparent than trying to obfuscate the instances themselves.
> >
> >> -Original Message-
> >> From: Hopper, Justin
> >> Sent: 06 April 2014 01:37
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [Nova][Trove] Managed Instances Feature
> >>
> >> Russell,
> >>
> >> Thanks for the quick reply. If I understand what you are suggesting
> >>it is that  there would be one Trove-Service Tenant/User that owns all
> >>instances from  the perspective of Nova.  This was one option proposed
> >>during our  discussions.  However, what we thought would be best is to
> >>continue to use  the user credentials so that Nova has the correct
> >>association.  We wanted a  more substantial and deliberate
> >>relationship between Nova and a  dependent service.  In this
> >>relationship, Nova would acknowledge which  instances are being
> >>managed by which Services and while ownership was still  to that of
> >>the User, management/manipulation of said Instance would be  solely
> >>done by the Service.
> >>
> >> At this point the guard that Nova needs to provide around the
> >>instance does  not need to be complex.  It would even suffice to keep
> >>those instances  hidden from such operations as "nova list" when
> >>invoked directly by the  user.
> >>
> >> Thanks,
> >>
> >> Justin Hopper
> >> Software Engineer - DBaaS
> >> irc: juice | gpg: EA238CF3 | twt: @justinhopper
> >>
> >>
> >>
> >>
> >> On 4/5/14, 14:20, "Russell Bryant"  wrote:
> >>
> >> >On 04/04/2014 08:12 PM, Hopper, Justin wrote:
> >> >> Greetings,
> >> >>
> >> >> I am trying to address an issue from certain perspectives and I
> >> >> think some support from Nova may be needed.
> >> >>
> >> >> _Problem_
> >> >> Services like Trove run in Nova Compute Instances.  These
> >> >> Services tr

Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-08 Thread Day, Phil
On a large cloud you're protected against this to some extent if the number of 
servers is >> the number of instances in the quota.

However it does feel that there are a couple of things missing to really 
provide some better protection:


-  A quota value on the maximum size of a server group

-  A policy setting so that the ability to use server groups can be 
controlled on a per-project basis

From: Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
Sent: 08 April 2014 11:32
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones : 
possible or not ?

"Abusive usage" : If user can request anti-affinity VMs, then why doesn't he 
uses that? This will result in user constantly requesting all his VMs being in 
the same anti-affinity group. This makes scheduler choose one physical host per 
VM. This will quickly flood the infrastructure and mess up with the objective 
of admin (e.g. Consolidation that regroup VM instead of spreading, spared 
hosts, etc) ; at some time it will be reported back that there is no host 
available, which appears as a bad experience for user.


From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday 8 April 2014 01:02
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones : 
possible or not ?

On 3 April 2014 08:21, Khanh-Toan Tran 
<khanh-toan.t...@cloudwatt.com> wrote:
Otherwise we cannot provide redundancy to client except using Region which
is dedicated infrastructure and networked separated and anti-affinity
filter which IMO is not pragmatic as it has tendency of abusive usage.

I'm sorry, could you explain what you mean here by 'abusive usage'?
--
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Split Oslo Incubator?

2014-04-08 Thread Roman Podoliaka
Hi Victor,

>>> The openstack.common module, also known as "Oslo Incubator" or "OpenStack 
>>> Common Libraries", has 44 dependencies. IMO we've reached a point where it 
>>> has become too huge. Would it be possible to split it into smaller parts and 
>>> distribute them on PyPI with stable APIs? I don't know Oslo Incubator well 
>>> enough to suggest the best granularity; a hint can be the number of dependencies.

This is exactly what we've been doing in Icehouse (and are going to
continue doing in Juno). In terms of oslo-incubator it's called
'graduation' of an incubator part - it becomes a full-fledged
library distributed via PyPI.

>>> Sharing code is a good idea, but now we have SQLAlchemy, WSGI, 
>>> cryptography, RPC, etc. in the same module. Who needs all these features 
>>> at once? Oslo Incubator must be usable outside OpenStack.

Sure! But I'd say even now one can use/sync only the particular
modules of oslo-incubator he/she needs. Though, I agree, releasing
these modules as libraries would simplify reuse of the code.

>>> We should now maybe move code from Oslo Incubator to "upstream" projects. 
>>> For example, timeutils extends the iso8601 module. We should maybe 
>>> contribute to that project and replace usage of timeutils with direct calls 
>>> to iso8601?

Agreed. I can't say for other libraries, but in oslo.db we've been
contributing features and bug fixes to SQLAlchemy, alembic and
SQLAlchemy-migrate. But we are still going to have some code that
won't be merged by upstream, just because it covers too specific a use
case for them (e.g. the 'deleted' column which is provided by one of
the oslo.db model mixins).

Thanks,
Roman

On Tue, Apr 8, 2014 at 1:35 PM, Victor Stinner
 wrote:
> (Follow-up of the "[olso] use of the "oslo" namespace package" thread)
>
> Hi,
>
> The openstack.common module, also known as "Oslo Incubator" or "OpenStack
> Common Libraries", has 44 dependencies. IMO we've reached a point where it has
> become too huge. Would it be possible to split it into smaller parts and
> distribute them on PyPI with stable APIs? I don't know Oslo Incubator well
> enough to suggest the best granularity; a hint can be the number of dependencies.
>
> Sharing code is a good idea, but now we have SQLAlchemy, WSGI, cryptography,
> RPC, etc. in the same module. Who needs all these features at once? Oslo
> Incubator must be usable outside OpenStack.
>
>
> Currently, Oslo Incubator is installed and updated manually using an
> "update.sh" script which copies ".py" files and replaces "openstack.common" with
> "nova.openstack.common" (where nova is the name of the project where Oslo
> Incubator is installed).
>
> I guess that update.sh was written to solve the two following points, tell me
> if I'm wrong:
>
>  - unstable API: the code changes too often, whereas users don't want to
> update their code regularly. Nova maybe has an old version of Oslo Incubator
> because of that.
>
>  - only copy a few files to avoid a lot of dependencies and copy useless files
>
> Smaller modules should solve these issues. They should be used as module:
> installed system-wide, not copied in each project. So fixing a bug would only
> require a single change, without having to "synchronize" each project.
>
>
> Yesterday, I proposed adding a new time_monotonic() function to the timeutils
> module. I was asked instead to enhance existing modules (like Monotime).
>
> We should now maybe move code from Oslo Incubator to "upstream" projects. For
> example, timeutils extends the iso8601 module. We should maybe contribute to
> that project and replace usage of timeutils with direct calls to iso8601?
>
> Victor
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Split Oslo Incubator?

2014-04-08 Thread Ihar Hrachyshka

On 08/04/14 12:35, Victor Stinner wrote:
> (Follow-up of the "[olso] use of the "oslo" namespace package"
> thread)
> 
> Hi,
> 
> The openstack.common module, also known as "Oslo Incubator" or
> "OpenStack Common Libraries", has 44 dependencies. IMO we've reached a
> point where it has become too huge. Would it be possible to split it
> into smaller parts and distribute them on PyPI with stable APIs? I
> don't know Oslo Incubator well enough to suggest the best granularity; a
> hint can be the number of dependencies.
> 

The code put into oslo-incubator is intended to stay there until its API
is stable enough. Once it matures, it should be moved to a separate
library. So oslo-incubator cannot be considered a library with a
stable API, by design.

> Sharing code is a good idea, but now we have SQLAlchemy, WSGI,
> cryptography, RPC, etc. in the same module. Who needs all these
> features at once? Oslo Incubator must be usable outside OpenStack.
> 
> 
> Currently, Oslo Incubator is installed and updated manually using a
>  "update.sh" script which copy ".py" files and replace
> "openstack.common" with "nova.openstack.common" (where nova is the
> name of the project where Oslo Incubator is installed).
> 
> I guess that update.sh was written to solve the two following
> points, tell me if I'm wrong:
> 
> - unstable API: the code changes too often, whereas users don't
> want to update their code regularly. Nova maybe has an old version
> of Oslo Incubator because of that.
> 
> - only copy a few files to avoid a lot of dependencies and copy
> useless files
> 

Yes, you're right about the intended motivation in both cases.

> Smaller modules should solve these issues. They should be used as
> module: installed system-wide, not copied in each project. So
> fixing a bug would only require a single change, without having to
> "synchronize" each project.
> 

That's exactly where the Oslo team is currently heading - moving code
from oslo-incubator to separate modules and eventually obsoleting
oslo-incubator.

> 
> Yesterday, I proposed adding a new time_monotonic() function to the
> timeutils module. I was asked instead to enhance existing modules (like
> Monotime).
> 
> We should now maybe move code from Oslo Incubator to "upstream"
> projects. For example, timeutils extends the iso8601 module. We
> should maybe contribute to that project and replace usage of
> timeutils with direct calls to iso8601?
> 

Indeed, those features that may be used outside OpenStack should be
merged into existing modules, or separate modules. There is still
OpenStack-specific code, though, that should be left under the 'oslo' hood.

> Victor
> 
> ___ OpenStack-dev
> mailing list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-08 Thread James Slagle
On Tue, Apr 8, 2014 at 6:03 AM, Day, Phil  wrote:
>> -Original Message-
>> From: Robert Collins [mailto:robe...@robertcollins.net]
>> Sent: 07 April 2014 21:01
>> To: OpenStack Development Mailing List
>> Subject: [openstack-dev] [TripleO] config options, defaults, oh my!
>>
>> So one interesting thing from the influx of new reviews is lots of patches
>> exposing all the various plumbing bits of OpenStack. This is good in some
>> ways (yay, we can configure more stuff), but in some ways its kindof odd -
>> like - its not clear when https://review.openstack.org/#/c/83122/ is needed.
>>
>> I'm keen to expose things that are really needed, but i'm not sure that /all/
>> options are needed - what do folk think?
>
> I'm very wary of trying to make the decision in TripleO of what should and 
> shouldn't be configurable in some other project. For sure the number of 
> config options in Nova is a problem, and one that's been discussed many times 
> at summits.   However I think you could also make the case/assumption for any 
> service that the debate about having a config option has already been held 
> within that service as part of the review that merged that option in the code 
> - re-running the debate about whether something should be configurable via 
> TripleO feels like some sort of policing function on configurability above 
> and beyond what the experts in that service have already considered, and that 
> doesn't feel right to me.

This captures very well what I was thinking as well. I just don't
think it should be a question in TripleO whether a config option should be
exposed or not.  I don't think folks have added options in any of the
reviews we've had recently, or that are in the queue, just for the sake
of exposing more options. I assume it's because they actually wanted to
tweak those options.



>> So my proposal here - what I'd like to do as we add all these config options 
>> to
>> TripleO is to take the care to identify which of A/B/C/D they are and code
>> them appropriately, and if the option is one of 1b) or 2b) make sure there 
>> is a
>> bug in the relevant project about the fact that we're having to override a
>> default. If the option is really a case of 1a) I'm not sure we want it
>> configurable at all.
>>

> If the root cause of the problem is that every change to exploit a 
> configurable feature of a deployed service needs an explicit matching change 
> in TripleO, maybe that's a degree of tight coupling that needs to be 
> addressed by having a more generic mechanism ?

Indeed, I think the implementation we have for configuration right now
that uses templates is a bit of a maintenance burden as Dan said.
Every time you want to expose a new configuration option, you need a
change to tripleo-image-elements to add it to the template, and
corresponding change to tripleo-heat-templates so that the option can
be set via Heat metadata.

This is similar I think (if not the same) as what Dan is proposing,
but I'd really like to see something along these lines instead:
- an implementation that uses sample or generated config files for all
the OpenStack projects instead of templates
- a way for elements to specify their configuration for TripleO when
we can't use upstream defaults (json/yaml structure, etc). This
structure would get baked into the image for os-collect-config to
pick up and consume. Also a way for elements to override one another,
e.g., the Nova element has some default configs, which the Ironic
element might need to override.
- enhance os-apply-config to support additional ways (iniset,
crudini, augeas, whatever) to modify config files instead of just
templatizing, so that we can apply that baked-in config and additional
config from Heat when not using templates (a rough sketch follows below).
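
As a rough illustration of the iniset-style piece (a plain stdlib sketch, not 
actual os-apply-config code; the example option is arbitrary):

    import ConfigParser  # Python 2, as used across OpenStack today

    def ini_set(path, section, key, value):
        # Apply one override on top of an existing (sample) config file.
        parser = ConfigParser.RawConfigParser()
        parser.read(path)
        if section != 'DEFAULT' and not parser.has_section(section):
            parser.add_section(section)
        parser.set(section, key, value)
        with open(path, 'w') as f:
            parser.write(f)

    # e.g. applying a Heat-provided override on top of nova.conf:
    # ini_set('/etc/nova/nova.conf', 'DEFAULT',
    #         'scheduler_max_attempts', '3')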

That solves the question of "what to expose" in the elements. Since
we're using sample/generated config files from the projects,
everything is exposed by default. When the upstream project adds a new
config option, it automatically gets exposed. Half of the ongoing
review burden is gone right there, with no developer guide needed
either.

Also part of the win... we'd have polished config files that look like
upstream's, except for the values of course :-). And, this also helps
get us going down the road of making the elements more discoverable in
terms of what can be configured and what the values are for TripleO,
something that I know Tuskar is very interested in having.

The question then becomes about what to do about exposing these
options in the Heat templates. We could keep the templates as we have
today, and just add parameters for the options that we know are useful
to us for the CD and CI use cases. We could then document how a user
would go about adding additional parameters to the templates that they
wanted exposed. Perhaps even give them a way to "merge" their custom
template snippets that add Parameters into those from
tripleo-heat-templates.

And, in fact, I think in most cases it *wouldn't* be a case 

Re: [openstack-dev] [Tuskar][TripleO] Tuskar Planning for Juno

2014-04-08 Thread Jaromir Coufal


On 2014/07/04 15:36, Tzu-Mainn Chen wrote:

Hi all,

One of the topics of discussion during the TripleO midcycle meetup a few weeks
ago was the direction we'd like to take Tuskar during Juno.  Based on the ideas
presented there, we've created a tentative list of items we'd like to address:

https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning

Please feel free to take a look and question, comment, or criticize!


Thanks,
Tzu-Mainn Chen


Thanks Mainn for the write-up, it looks like a good summary for the J cycle. 
I don't have any disagreements from my side.


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-08 Thread Jaromir Coufal


On 2014/08/04 01:50, Robert Collins wrote:

tl;dr: 3 more core members to propose:
bnemec
greghaynes
jdon


jdon -> jdob

+1 for all the folks.

-- Jarda



On 4 April 2014 08:55, Chris Jones  wrote:

Hi

+1 for your proposed -core changes.

Re your question about whether we should retroactively apply the 3-a-day
rule to the 3 month review stats, my suggestion would be a qualified no.

I think we've established an agile approach to the member list of -core, so
if there are a one or two people who we would have added to -core before the
goalposts moved, I'd say look at their review quality. If they're showing
the right stuff, let's get them in and helping. If they don't feel our new
goalposts are achievable with their workload, they'll fall out again
naturally before long.


So I've actioned the prior vote.

I said: "Bnemec, jdob, greg etc - good stuff, I value your reviews
already, but..."

So... looking at a few things - long period of reviews:
60 days:
|greghaynes   | 1210  22  99   0   081.8% |
14 ( 11.6%)  |
|  bnemec | 1160  38  78   0   067.2% |
10 (  8.6%)  |
|   jdob  |  870  15  72   0   082.8% |
4 (  4.6%)  |

90 days:

|  bnemec | 1450  40 105   0   072.4% |
17 ( 11.7%)  |
|greghaynes   | 1420  23 119   0   083.8% |
22 ( 15.5%)  |
|   jdob  | 1060  17  89   0   084.0% |
7 (  6.6%)  |

Ben's reviews are thorough, he reviews across all contributors, he
shows good depth of knowledge and awareness across tripleo, and is
sensitive to the pragmatic balance between 'right' and 'good enough'.
I'm delighted to support him for core now.

Greg is very active, reviewing across all contributors with pretty
good knowledge and awareness. I'd like to see a little more contextual
awareness though - theres a few (but not many) reviews where looking
at how the big picture of things fitting together more would have been
beneficial. *however*, I think that's a room-to-improve issue vs
not-good-enough-for-core - to me it makes sense to propose him for
core too.

Jay's reviews are also very good and consistent, somewhere between
Greg and Ben in terms of bigger-context awareness - so another
definite +1 from me.

-Rob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Enabling ServerGroup filters by default (was RE: [nova] Server Groups are not an optional element, bug or feature ?)

2014-04-08 Thread Russell Bryant
On 04/08/2014 06:16 AM, Day, Phil wrote:
>> https://bugs.launchpad.net/nova/+bug/1303983
>>
>> --
>> Russell Bryant
> 
> Wow - was there really a need to get that change merged within 12 hours and 
> before others had a chance to review and comment on it ?

It was targeted against RC2 which we're trying to get out ASAP.  The
change is harmless.

> I see someone has already queried (post the merge) if there isn't a 
> performance impact.

The commit message indicates that when the API is not used, the
scheduler filters are a no-op.  There is no noticeable performance impact.
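
For anyone curious, the shape of the check is roughly the following 
(paraphrased, not the actual filter code):

    # When the request carries no server group information, the
    # anti-affinity filter accepts every host immediately.
    def host_passes(host_state, filter_properties):
        group_hosts = filter_properties.get('group_hosts')
        if not group_hosts:
            return True   # no server group in play: effectively a no-op
        return host_state.host not in group_hosts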

> I've raised this point before - but apart from non-urgent security fixes 
> shouldn't there be a minimum review period to make sure that all relevant 
> feedback can be given ?

Separate topic, but no, I do not think there should be any rules on
this.  I think in the majority of cases, people do the right thing.

In this case, the patch was incredibly trivial and has no performance
impact, so I don't see anything wrong.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-08 Thread Jaromir Coufal

On 2014/03/04 13:02, Robert Collins wrote:

Getting back in the swing of things...

Hi,
like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this month's review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core



  - Jaromir Coufal for removal from -core


+1 Not involved much in TripleO code.


Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.


+1 to all changes

[snip]

-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Server Groups are not an optional element, bug or feature ?

2014-04-08 Thread Russell Bryant
On 04/08/2014 06:29 AM, Day, Phil wrote:
> 
> 
>> -Original Message-
>> From: Russell Bryant [mailto:rbry...@redhat.com]
>> Sent: 07 April 2014 19:12
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [nova] Server Groups are not an optional
>> element, bug or feature ?
>>
>>
> ...
>  
>> I consider it a complete working feature.  It makes sense to enable the 
>> filters
>> by default.  It's harmless when the API isn't used.  That was just an 
>> oversight.
>>
>> The list of instances in a group through the API only shows non-deleted
>> instances.
> 
> True, but the lack of even a soft delete on the rows in 
> instance_group_member worries me - it's not clear why that wasn't fixed 
> rather than just hiding the deleted instances.    I'd have expected the full DB 
> lifecycle to be implemented before something was considered a complete 
> working feature.

We were thinking that there may be a use for being able to query a full
list of instances (including the deleted ones) for a group.  The API
just hasn't made it that far yet.  Just hiding them for now leaves room
to iterate and doesn't prevent either option (exposing the deleted
instances, or changing to auto-delete them from the group).

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Get Keystone user details

2014-04-08 Thread Naveen Kumar.S
For a user with the "Member" role, how can I get the contents of the "extra" 
column from the user table in the Keystone DB using the Python Keystone API? 
Also, for a user who is already logged in from Horizon, how can this column 
be extracted on the Django side?
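
For context, this is the direction I have been poking at (an untested sketch; 
the credentials, endpoint and user id are placeholders, and whether the "extra" 
keys show up depends on the identity driver exposing them):

    from keystoneclient.v2_0 import client

    keystone = client.Client(username='admin',
                             password='secret',
                             tenant_name='admin',
                             auth_url='http://127.0.0.1:5000/v2.0')

    user = keystone.users.get('some-user-id')
    # _info is the raw dict deserialized from the API response; any
    # extra attributes returned by the backend show up here.
    print(user._info)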



Thanks,
Naveen.___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-08 Thread Chris Jones
Hi

+1

Cheers,
--
Chris Jones

> On 8 Apr 2014, at 00:50, Robert Collins  wrote:
> 
> tl;dr: 3 more core members to propose:
> bnemec
> greghaynes
> jdon
> 
> 
>> On 4 April 2014 08:55, Chris Jones  wrote:
>> Hi
>> 
>> +1 for your proposed -core changes.
>> 
>> Re your question about whether we should retroactively apply the 3-a-day
>> rule to the 3 month review stats, my suggestion would be a qualified no.
>> 
>> I think we've established an agile approach to the member list of -core, so
>> if there are a one or two people who we would have added to -core before the
>> goalposts moved, I'd say look at their review quality. If they're showing
>> the right stuff, let's get them in and helping. If they don't feel our new
>> goalposts are achievable with their workload, they'll fall out again
>> naturally before long.
> 
> So I've actioned the prior vote.
> 
> I said: "Bnemec, jdob, greg etc - good stuff, I value your reviews
> already, but..."
> 
> So... looking at a few things - long period of reviews:
> 60 days:
> |greghaynes   | 1210  22  99   0   081.8% |
> 14 ( 11.6%)  |
> |  bnemec | 1160  38  78   0   067.2% |
> 10 (  8.6%)  |
> |   jdob  |  870  15  72   0   082.8% |
> 4 (  4.6%)  |
> 
> 90 days:
> 
> |  bnemec | 1450  40 105   0   072.4% |
> 17 ( 11.7%)  |
> |greghaynes   | 1420  23 119   0   083.8% |
> 22 ( 15.5%)  |
> |   jdob  | 1060  17  89   0   084.0% |
> 7 (  6.6%)  |
> 
> Ben's reviews are thorough, he reviews across all contributors, he
> shows good depth of knowledge and awareness across tripleo, and is
> sensitive to the pragmatic balance between 'right' and 'good enough'.
> I'm delighted to support him for core now.
> 
> Greg is very active, reviewing across all contributors with pretty
> good knowledge and awareness. I'd like to see a little more contextual
> awareness though - theres a few (but not many) reviews where looking
> at how the big picture of things fitting together more would have been
> beneficial. *however*, I think that's a room-to-improve issue vs
> not-good-enough-for-core - to me it makes sense to propose him for
> core too.
> 
> Jay's reviews are also very good and consistent, somewhere between
> Greg and Ben in terms of bigger-context awareness - so another
> definite +1 from me.
> 
> -Rob
> 
> 
> 
> 
> -- 
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-08 Thread Chris Jones
Hi

> On 8 Apr 2014, at 11:20, Sean Dague  wrote:
> 
> I think Phil is dead on. I'll also share the devstack experience here.
> Until we provided the way for arbitrary pass through we were basically
> getting a few patches every week that were "let me configure this
> variable in the configs" over and over again.

+1

We can't be in the business of prescribing what users can/can't configure in 
the daemons they are using us to deploy.

Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-08 Thread Avishay Traeger
On Tue, Apr 8, 2014 at 9:17 AM, Deepak Shetty  wrote:

> Hi List,
> I had a few Qs on the implementation of the manage_existing and unmanage API
> extensions.
>
> 1) For the LVM case, it renames the LV. Isn't it better to use name_id (the
> one used during cinder migrate to keep the id the same for a different backend
> name/id) to map the cinder name/id to the backend name/id and thus avoid
> renaming the backend storage? Renaming isn't good since it changes the
> original name of the storage object and hence the storage admin may lose
> track. The Storwize driver uses the UID and changes vdisk_name on the backend
> array, which isn't good either. Is renaming a must? If yes, why?
>

'name_id' is an ID, like c8b3d8e2-2410-4362-b24b-548a13fa850b.
In migration, both the original and new volumes use the same template for
volume names, just with a different ID, so name_id works well for that.
 When importing a volume that wasn't created by Cinder, chances are it
won't conform to this template, and so name_id won't work (i.e., I can call
the volume 'my_very_important_db_volume', and name_id can't help with
that).  When importing, the admin should give the volume a proper name and
description, and won't lose track of it - it is now being managed by Cinder.
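
To illustrate (a rough sketch - volume_name_template is a real Cinder option, 
but the helper function is made up):

    # How Cinder derives the backend name for a volume; name_id just
    # swaps in a different UUID, it cannot produce an arbitrary name.
    volume_name_template = 'volume-%s'   # Cinder's default

    def backend_name(volume):
        return volume_name_template % (volume.get('_name_id') or volume['id'])

    v = {'id': 'c8b3d8e2-2410-4362-b24b-548a13fa850b', '_name_id': None}
    print(backend_name(v))  # volume-c8b3d8e2-2410-4362-b24b-548a13fa850b
    # 'my_very_important_db_volume' can never come out of this template.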


> 2) How about providing a force rename option? If force = yes, use
> rename; otherwise name_id?
>

As I mentioned, name_id won't work.  You would need some DB changes to
accept ANY volume name, and it can get messy.


> 3) During unmanage, it's good if we can revert the name back (in case it was
> renamed as part of manage), so that we leave the storage object as it was
> before it was managed by cinder?
>

I don't see any compelling reason to do this.

Thanks,
Avishay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] nominations for Climate PTL for Juno cycle are now open

2014-04-08 Thread Sergey Lukjanov
Hey folks,

the nomination period has already ended and there is only one
candidate, so there is no need to set up voting.
The Climate PTL for Juno wiki page [0] updated.

So, the Climate PTL for Juno cycle is Dina Belova, congratulations!

Thanks.

[0] https://wiki.openstack.org/wiki/Climate/PTL_Elections_Juno#PTL

On Sat, Mar 29, 2014 at 10:43 AM, Sergey Lukjanov
 wrote:
> Hi folks,
>
> as it was discussed on the last Climate meeting, we're running
> elections for the Climate PTL for Juno cycle. Schedule and policies
> are fully aligned with official OpenStack PTLs elections.
>
> You can find more info in official Juno elections wiki page [0] and
> the same page for Climate elections [1], additionally some more info
> in official nominations opening email [2].
>
> Timeline:
>
> till 05:59 UTC April 4, 2014: Open candidacy to PTL positions
> April 4, 2014 - 1300 UTC April 11, 2014: PTL elections
>
> To announce your candidacy please start a new openstack-dev at
> lists.openstack.org mailing list thread with the following subject:
> "[climate] PTL Candidacy".
>
> [0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
> [1] https://wiki.openstack.org/wiki/Climate/PTL_Elections_Juno
> [2] http://lists.openstack.org/pipermail/openstack-dev/2014-March/031239.html
>
> Thank you.
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-08 Thread Imre Farkas

On 04/08/2014 01:50 AM, Robert Collins wrote:

tl;dr: 3 more core members to propose:
bnemec
greghaynes
jdon


+1

Imre


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] run tests using testr on Python 3

2014-04-08 Thread victor stinner
Hi,

Oslo Incubator runs tests using testr on Python 2, but it uses nosetests on 
Python 3 to only run a subset of the test suite (modules and tests ported to 
Python 3). In my latest patch for Oslo Incubator (gettext), Ben Nemec wrote:
"I think we could get around the nose issue by using a testr regex to filter 
the tests we run for py33 (like I did in py27 for running everything but rpc in 
parallel), but that's outside the scope of this change so +2."

I tried to run Oslo Incubator tests with testr on Python 3, but testr fails to 
load "openstack.common.eventlet_backdoor", because the eventlet module is not 
installed (eventlet is not Python 3 compatible yet). If I understood correctly, 
testr first loads all modules and then filters the tests to run using the regex 
passed on the command line. If I'm correct, I don't see right now how to run 
Oslo Incubator tests with testr on Python 3. But I don't know the testr tool 
well, so I probably missed an option.

I would like to use testr because many Oslo Incubator tests use testscenarios 
(which doesn't work with nosetests).
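
For reference, the testscenarios pattern in question looks like this (a minimal 
example; testr/testtools expand the scenarios, nose does not):

    import testscenarios
    import testtools

    class TestAdd(testscenarios.TestWithScenarios, testtools.TestCase):

        scenarios = [
            ('ints', dict(a=1, b=2, expected=3)),
            ('floats', dict(a=1.5, b=2.5, expected=4.0)),
        ]

        def test_add(self):
            # Runs once per scenario, with the scenario dict's entries
            # attached to self as attributes.
            self.assertEqual(self.expected, self.a + self.b)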

By the way, would it be possible to fix nosetests to use testscenarios?

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Heat] The Neutron API and orchestration

2014-04-08 Thread CARVER, PAUL
Zane Bitter wrote:

>(1) Create a network
>Instinctively, I want a Network to be something like a virtual VRF 
>(VVRF?): a separate namespace with its own route table, within which 
>subnet prefixes are not overlapping, but which is completely independent 
>of other Networks that may contain overlapping subnets. As far as I can 
>tell, this basically seems to be the case. The difference, of course, is 
>that instead of having to configure a VRF on every switch/router and 
>make sure they're all in sync and connected up in the right ways, I just 
>define it in one place globally and Neutron does the rest. I call this 
>#winning. Nice work, Neutron.

This is your main misunderstanding and the source of most, but not all
of the rest of your issues. A "network" in Neutron is NOT equivalent
to a VRF. A "network" is really just a single LAN segment (i.e. a single
broadcast domain.) It allows the use of multiple subnets on the same
broadcast domain, which is generally not a great idea, but doesn't
violate any standards and is sometimes useful.
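To make that concrete, here is a rough python-neutronclient sketch of one
Neutron "network" carrying two subnets - a single broadcast domain with two
prefixes. The credentials and endpoint are placeholders, not a working setup:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://127.0.0.1:5000/v2.0')

    # One "network" == one LAN segment / broadcast domain...
    net = neutron.create_network({'network': {'name': 'net1'}})['network']

    # ...which may carry more than one subnet (prefix).
    for cidr in ('10.0.0.0/24', '10.0.1.0/24'):
        neutron.create_subnet({'subnet': {'network_id': net['id'],
                                          'ip_version': 4,
                                          'cidr': cidr}})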

There is no construct in Neutron to represent an entire
network in the sense that most networking people use the word
(i.e., multiple broadcast domains interconnected via routers.)

A router in Neutron also doesn't really represent the same thing
that most networking people mean by the word, at least not yet.
A router in Neutron is basically a NAT box like a home Linksys/
Netgear/etc, not a Cisco ASR or Juniper M or T series. Most notably
it doesn't run routing protocols. It doesn't handle route redistribution,
it doesn't handle queuing and QoS, ACL support is only preliminary, etc.

So your expectation of being able to orchestrate a "real" network in
the sense of a collection of LAN segments and routers and global
routing tables and topology isn't native to Neutron. So the question
is whether that overarching orchestration should be in Heat using
only the primitives that Neutron currently provides or whether
Neutron should be extended to include entire networks in the
sense that you and I would tend to define the word.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] cliff 1.6.1 released

2014-04-08 Thread Doug Hellmann
Version 1.6.1 of cliff has been released. This is a bug-fix release to
correct an issue with the shell output formatter, and isn't a critical
upgrade unless you are using the shell output for interpreting cliff
output in bash scripts.

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-08 Thread Derek Higgins
On 08/04/14 00:50, Robert Collins wrote:
> tl;dr: 3 more core members to propose:
> bnemec
> greghaynes
> jdon

+1 for all

> 
> 
> On 4 April 2014 08:55, Chris Jones  wrote:
>> Hi
>>
>> +1 for your proposed -core changes.
>>
>> Re your question about whether we should retroactively apply the 3-a-day
>> rule to the 3 month review stats, my suggestion would be a qualified no.
>>
>> I think we've established an agile approach to the member list of -core, so
>> if there are a one or two people who we would have added to -core before the
>> goalposts moved, I'd say look at their review quality. If they're showing
>> the right stuff, let's get them in and helping. If they don't feel our new
>> goalposts are achievable with their workload, they'll fall out again
>> naturally before long.
> 
> So I've actioned the prior vote.
> 
> I said: "Bnemec, jdob, greg etc - good stuff, I value your reviews
> already, but..."
> 
> So... looking at a few things - long period of reviews:
> 60 days:
> | greghaynes |  121   0  22   99   0   0   81.8% |  14 ( 11.6%) |
> |     bnemec |  116   0  38   78   0   0   67.2% |  10 (  8.6%) |
> |       jdob |   87   0  15   72   0   0   82.8% |   4 (  4.6%) |
> 
> 90 days:
> 
> |     bnemec |  145   0  40  105   0   0   72.4% |  17 ( 11.7%) |
> | greghaynes |  142   0  23  119   0   0   83.8% |  22 ( 15.5%) |
> |       jdob |  106   0  17   89   0   0   84.0% |   7 (  6.6%) |
> 
> Ben's reviews are thorough, he reviews across all contributors, he
> shows good depth of knowledge and awareness across tripleo, and is
> sensitive to the pragmatic balance between 'right' and 'good enough'.
> I'm delighted to support him for core now.
> 
> Greg is very active, reviewing across all contributors with pretty
> good knowledge and awareness. I'd like to see a little more contextual
> awareness though - theres a few (but not many) reviews where looking
> at how the big picture of things fitting together more would have been
> beneficial. *however*, I think that's a room-to-improve issue vs
> not-good-enough-for-core - to me it makes sense to propose him for
> core too.
> 
> Jay's reviews are also very good and consistent, somewhere between
> Greg and Ben in terms of bigger-context awareness - so another
> definite +1 from me.
> 
> -Rob
> 
> 
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-08 Thread Jim Rollenhagen

Guys, thank you very much for your comments,

I thought a lot about why we need to be so limited in IPA use cases. Now it is 
much clearer for me. Indeed, having some kind of agent running inside the host 
OS is not what many people want to see, and now I'd rather agree with that.

But there are still some questions which are difficult for me to answer.
0) There is plenty of old hardware which does not have IPMI/iLO at all. How is 
Ironic supposed to power it off and on? Ssh? But Ironic is not supposed to 
interact with the host OS. 
I’m not sure about this yet. I’m inclined to say “we don’t support such 
hardware”, at least in the short-term. How does Ironic handle hardware without 
a power management interface today?


1) We agreed that Ironic is the place where we can store hardware info 
(the 'extra' field in the node model). But many modern hardware configurations 
support hot-pluggable hard drives, CPUs, and even memory. How will Ironic know 
that the hardware configuration has changed? Does it need to know about 
hardware changes at all? Is it supposed that some monitoring agent (NOT the 
Ironic agent) will be used for that? But if we already have a discovery 
extension in the Ironic agent, then it sounds rational to use this extension 
for monitoring as well. Right?
I believe that hardware changes should not be made while an instance is 
deployed to a node (except maybe swapping a dead stick of RAM or something). If 
a user wants a node with more RAM (for example), they should provision a new 
node and destroy the old one, just like they would do with VMs provisioned by 
Nova.


2) When I deal with some kind of hypervisor, I can always use the 'virsh list 
--all' command to know which nodes are running and which aren't. How am I 
supposed to know which nodes are still alive in the case of Ironic? IPMI? 
Again, IPMI is not always available. And if IPMI is available, then why do we 
need a heartbeat in the Ironic agent?
Every power driver today has some sort of “power status” command that Ironic 
relies on to tell if the node is alive, and I think we can continue to rely on 
this. We have a heartbeat in the agent to ensure that the agent process is 
still alive and reachable, as the agent might run for a long time before an 
instance is deployed to the node, and bugs happen.

Is that helpful?

// jim





Vladimir Kozhukalov


On Fri, Apr 4, 2014 at 9:46 PM, Ezra Silvera  wrote:
> Ironic's responsibility ends where the host OS begins. Ironic is a bare metal 
> provisioning service, not a configuration management service.

I agree with the above, but just to clarify I would say that Ironic shouldn't 
*interact*  with the host OS once it booted. Obviously it can still perform BM 
tasks underneath the OS (while it's up and running)  if needed (e.g., force 
shutdown through IPMI, etc..)





Ezra



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] run tests using testr on Python 3

2014-04-08 Thread Julien Danjou
On Tue, Apr 08 2014, victor stinner wrote:

> I would like to use testr because many Olso Incubator tests use
> testscenarios (which doesn't work with nosetests).

What about using both for now, nosetests + testr for the files you need
that have testscenarios?

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-08 Thread James Slagle
On Mon, Apr 7, 2014 at 7:50 PM, Robert Collins
 wrote:
> tl;dr: 3 more core members to propose:
> bnemec
> greghaynes
> jdob

+1 to all. I've valued the feedback from these individuals as both
fellow reviewers and on my submitted patches.



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-08 Thread Jay Pipes
On Tue, 2014-04-08 at 10:49 +, Day, Phil wrote:
> On a large cloud you’re protected against this to some extent if the
> number of servers is >> number of instances in the quota.
> 
> However it does feel that there are a couple of things missing to
> really provide some better protection: 
> 
> - A quota value on the maximum size of a server group
> - A policy setting so that the ability to use service-groups
> can be controlled on a per project basis 

Alternatively, we could just have the affinity filters serve as weighting
filters instead of returning NoValidHosts.

That way, a request containing an affinity hint would cause the
scheduler to prefer placing the new VM near (or not-near) other
instances in the server group, but if no hosts exist that meet that
criteria, the filter simply finds a host with the most (or fewest, in
case of anti-affinity) instances that meet the affinity criteria.
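A rough, self-contained sketch of that "prefer, don't reject" behaviour (all
names here are hypothetical - this is not nova's actual weigher plugin API):

    class HostState(object):
        def __init__(self, name, instances):
            self.name = name
            self.instances = set(instances)  # instance UUIDs on this host

    def soft_affinity_score(host, group_members):
        # More co-located group members -> higher score; a host running
        # none of them still scores zero instead of being filtered out.
        return len(host.instances & set(group_members))

    hosts = [HostState('h1', ['a', 'b']), HostState('h2', ['c'])]
    group = ['a', 'b', 'x']
    best = max(hosts, key=lambda h: soft_affinity_score(h, group))
    print(best.name)  # 'h1' wins; nothing raises NoValidHost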

Best,
-jay

> From: Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com] 
> Sent: 08 April 2014 11:32
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova] Hosts within two Availability
> Zones : possible or not ?
> 
> “Abusive usage”: if a user can request anti-affinity VMs, then why
> wouldn’t he use that? This will result in users constantly requesting
> that all their VMs be in the same anti-affinity group. This makes the
> scheduler choose one physical host per VM. This will quickly flood the
> infrastructure and interfere with the admin’s objectives (e.g.
> consolidation that regroups VMs instead of spreading them, spare hosts,
> etc.); at some point it will be reported back that there is no host
> available, which is a bad experience for the user. 
> 
>  
> 
>  
> 
> De : Ian Wells [mailto:ijw.ubu...@cack.org.uk] 
> Envoyé : mardi 8 avril 2014 01:02
> À : OpenStack Development Mailing List (not for usage questions)
> Objet : Re: [openstack-dev] [Nova] Hosts within two Availability
> Zones : possible or not ?
> 
> 
>  
> 
> On 3 April 2014 08:21, Khanh-Toan Tran 
> wrote:
> 
> Otherwise we cannot provide redundancy to client except using
> Region which
> is dedicated infrastructure and networked separated and
> anti-affinity
> filter which IMO is not pragmatic as it has tendency of
> abusive usage.
> 
> 
>  
> 
> 
> I'm sorry, could you explain what you mean here by 'abusive usage'?
> 
> 
> 
> -- 
> 
> 
> Ian.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-08 Thread Jan Provaznik

On 04/08/2014 01:50 AM, Robert Collins wrote:

tl;dr: 3 more core members to propose:
bnemec
greghaynes
jdon



+1 to all

Jan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-08 Thread Clint Byrum
Excerpts from Day, Phil's message of 2014-04-08 03:03:41 -0700:
> > -Original Message-
> > From: Robert Collins [mailto:robe...@robertcollins.net]
> > Sent: 07 April 2014 21:01
> > To: OpenStack Development Mailing List
> > Subject: [openstack-dev] [TripleO] config options, defaults, oh my!
> > 
> > So one interesting thing from the influx of new reviews is lots of patches
> > exposing all the various plumbing bits of OpenStack. This is good in some
> > ways (yay, we can configure more stuff), but in some ways its kindof odd -
> > like - its not clear when https://review.openstack.org/#/c/83122/ is needed.
> > 
> > I'm keen to expose things that are really needed, but I'm not sure that 
> > /all/
> > options are needed - what do folk think? 
> 
> I'm very wary of trying to make the decision in TripleO of what should
> and shouldn't be configurable in some other project.For sure the
> number of config options in Nova is a problem, and one that's been
> discussed many times at summits.   However I think you could also make
> the case/assumption for any service that the debate about having a config
> option has already been held within that service as part of the review
> that merged that option in the code - re-running the debate about whether
> something should be configurable via TripleO feels like some sort of
> policing function on configurability above and beyond what the experts in
> that service have already considered, and that doesn't feel right to me.
> 
> Right now TripleO has a very limited view of what can be configured,
> based, as I understand it, primarily on what's needed for its CI job.
> As more folks who have real deployments start to look at using TripleO,
> it's inevitable that they are going to want to enable the settings that
> are important to them to be configured.  I can't imagine that anyone is
> going to add a configuration value for the sake of it, so can't we start
> with the perspective that we are slowly exposing the set of values that
> do need to be configured ?
> 



> > 
> > So my proposal here - what I'd like to do as we add all these config 
> > options to
> > TripleO is to take the care to identify which of A/B/C/D they are and code
> > them appropriately, and if the option is one of 1b) or 2b) make sure there 
> > is a
> > bug in the relevant project about the fact that we're having to override a
> > default. If the option is really a case of 1a) I'm not sure we want it
> > configurable at all.
> > 
> 
> I'm not convinced that anyone is in a position to judge that there is a
> single right answer - I know the values that are right for my deployments,
> but I'm not arrogant enough to say that they universally applicable.
> You only have to see the  wide range of Openstack Deployments presented
> at every summit to know that that there a lot of different use cases out
> there.   My worry is that if we try to have that debate in the context of
> a TripleO review, then we'll just spin between opinions rather than make
> the rapid progress towards getting the needed degree of configurability.
> So I would rule out 1a and 1b as options.  Again it feels like any
> debate about this really belongs back in the projects that TripleO is
> deploying rather than TripleO itself.
> 

To me, TripleO isn't just OpenStack CI on a large scale. And TripleO isn't
"here's a generic set of tools, go build a cloud." At least, not yet,
likely some day.

To me, TripleO is first _a_ deployment of OpenStack. Meaning, we can
consider our first milestone reached when we have produced a consumable,
high scale deployment of OpenStack using OpenStack to operate it.

Contributions that do not push us toward that milestone have an impact
on it. At the very least they distract our reviewers, and at most, they
actively delay that milestone by adding complexity resulting in slower
velocity toward it.

I don't mean to say that they're not welcome, as I think they add
an incredible amount of value; we end up with a virtuous circle as
we gather users and contribution that is not directly related to our
immediate goals.

However, what I'm suggesting is that generic universal configurability of
all of the services isn't the immediate goal. Many of these configuration
knobs, while eventually necessary to widen the usability of TripleO in
a similar fashion to the rest of the OpenStack, will delay even the most
narrow initial usability of TripleO.

> I'm also not a great fan of considering a change in the default value
> (in either TripleO or the original project) as an alternative - whether
> the original default was a good choice or not there is a high chance
> that someone is relying on it - so any change in a default needs to go
> through a deprecation period so that folks have a chance to explicitly
> configure to keep the setting if they need to.  That pattern is reasonably
> well established in most of the projects, and as folks are now starting
> to use TripleO I think it needs to have the same discipline.

Re: [openstack-dev] [oslo] use of the "oslo" namespace package

2014-04-08 Thread Doug Hellmann
On Tue, Apr 8, 2014 at 6:12 AM, Victor Stinner
 wrote:
> Hi,
>
> Le mardi 8 avril 2014, 10:54:24 Julien Danjou a écrit :
>> On Mon, Apr 07 2014, Doug Hellmann wrote:
>> > We can avoid adding to the problem by putting each new library in its
>> > own package. We still want the Oslo name attached for libraries that
>> > are really only meant to be used by OpenStack projects, and so we need
>> > a naming convention. I'm not entirely happy with the "crammed
>> > together" approach for oslotest and oslosphinx. At one point Dims and
>> > I talked about using a prefix "oslo_" instead of just "oslo", so we
>> > would have "oslo_db", "oslo_i18n", etc. That's also a bit ugly,
>> > though. Opinions?
>>
>> Honestly, I think it'd be better to not have oslo at all and use
>> independent - if possible explicit - names for everything
>
> I agree.
>
> "oslo" name remembers me the "zope" fiasco. Except of zope.interfaces and the
> ZODB, I don't think that any Zope module was widely used outside Zope and it
> was a big fail. Because of that, Zope 3 restarted almost from scratch with
> small independent modules.
>
> "oslo" and "openstack.common" look more and more like Zope bloated modules.
> For example, Oslo Incubator has 44 dependencies. Who outside OpenStack would
> like to use a module which has 44 dependencies? Especially if you need a
> single module like timeutils.
>
> "nova.openstack.common.timeutils" name doesn't look correct: the Zen of Python
> says "Flat is better than nested": "xxx.timeutils" would be better. Same
> remark for "oslo.config.cfg" => "xxx.cfg".
>
> Choosing a name is hard. Dropping "oslo" requires finding a completely new 
> name.
> For example, "oslo.config" cannot be renamed to "config", this name is already
> used on PyPI. Same issue for "messaging" (and "message" is also reserved).

Right, that's the challenge.

I would like for us to continue to use the oslo prefix in some cases,
because it makes naming simple libraries easier but more importantly
because it is an indicator that we intend those libraries to be much
more useful to OpenStack projects than to anyone else. For projects
where that isn't the case (cliff, stevedore, taskflow, tooz, etc.) we
are already choosing "non-branded" names.

>
> "oslo.rootwrap" can be simply renamed to "rootwrap".
>
> Other suggestions:
>
> * oslo.config => cmdconfig
> * oslo.messaging => msgqueue
>
> Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] use of the "oslo" namespace package

2014-04-08 Thread Doug Hellmann
Donald linked to a pip bug later in this thread, so we might be able
to help by working on a fix. I haven't investigated that, but I assume
if it was easy the pypa team would have already fixed it.

When you saw the problem, were you running a current version of
devstack with Sean's change to install oslo libraries via "pip"
instead of "pip -e"?

Doug


On Mon, Apr 7, 2014 at 4:34 PM, Vishvananda Ishaya
 wrote:
> I dealt with this myself the other day and it was a huge pain. That said,
> changing all the packages seems like a nuclear option. Is there any way
> we could change python that would make it smarter about searching multiple
> locations for namespace packages?
>
> Vish
>
> On Apr 7, 2014, at 12:24 PM, Doug Hellmann  
> wrote:
>
>> Some of the production Oslo libraries are currently being installed
>> into the "oslo" namespace package (oslo.config, oslo.messaging,
>> oslo.vmware, oslo.rootwrap, and oslo.version). Over the course of the
>> last 2 release cycles, we have seen an increase in the number of
>> developers who end up with broken systems, where an oslo library (most
>> often oslo.config) cannot be imported. This is usually caused by
>> having one copy of a library installed normally (via a system package
>> or via pip) and another version in "development" (a.k.a., "editable")
>> mode as installed by devstack. The symptom is most often an error
>> about importing oslo.config, although that is almost never the library
>> causing the problem.
>>
>> We have already worked around this issue with the non-production
>> libraries by installing them into their own packages, without using
>> the namespace (oslotest, oslosphinx, etc.). We have also changed the
>> way packages are installed in nova's tox.ini, to force installation of
>> packages into the virtualenv (since exposing the global site-packages
>> was a common source of the problem). And very recently, Sean Dague
>> changed devstack to install the oslo libraries not in editable mode,
>> so that installing from source should replace any existing installed
>> version of the same library.
>>
>> However, the problems seem to persist, and so I think it's time to
>> revisit our decision to use a namespace package.
>>
>> After experimenting with non-namespace packages, I wasn't able to
>> reproduce the same import issues. I did find one case that may cause
>> us some trouble, though. Installing a package and then installing an
>> editable version from source leaves both installed and the editable
>> version appears first in the import path. That might cause surprising
>> issues if the source is older than the package, which happens when a
>> devstack system isn't updated regularly and a new library is released.
>> However, surprise due to having an old version of code should occur
>> less frequently than, and have less of an impact than, having a
>> completely broken set of oslo libraries.
>>
>> We can avoid adding to the problem by putting each new library in its
>> own package. We still want the Oslo name attached for libraries that
>> are really only meant to be used by OpenStack projects, and so we need
>> a naming convention. I'm not entirely happy with the "crammed
>> together" approach for oslotest and oslosphinx. At one point Dims and
>> I talked about using a prefix "oslo_" instead of just "oslo", so we
>> would have "oslo_db", "oslo_i18n", etc. That's also a bit ugly,
>> though. Opinions?
>>
>> Given the number of problems we have now (I help about 1 dev per week
>> unbreak their system), I think we should also consider renaming the
>> existing libraries to not use the namespace package. That isn't a
>> trivial change, since it will mean updating every consumer as well as
>> the packaging done by distros. If we do decide to move them, I will
>> need someone to help put together a migration plan. Does anyone want
>> to volunteer to work on that?
>>
>> Before we make any changes, it would be good to know how bad this
>> problem still is. Do developers still see issues on clean systems, or
>> are all of the problems related to updating devstack boxes? Are people
>> figuring out how to fix or work around the situation on their own? Can
>> we make devstack more aggressive about deleting oslo libraries before
>> re-installing them? Are there other changes we can make that would be
>> less invasive?
>>
>> Doug
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] use of the "oslo" namespace package

2014-04-08 Thread Doug Hellmann
On Tue, Apr 8, 2014 at 3:28 AM, Mark McLoughlin  wrote:
> On Mon, 2014-04-07 at 15:24 -0400, Doug Hellmann wrote:
>> We can avoid adding to the problem by putting each new library in its
>> own package. We still want the Oslo name attached for libraries that
>> are really only meant to be used by OpenStack projects, and so we need
>> a naming convention. I'm not entirely happy with the "crammed
>> together" approach for oslotest and oslosphinx. At one point Dims and
>> I talked about using a prefix "oslo_" instead of just "oslo", so we
>> would have "oslo_db", "oslo_i18n", etc. That's also a bit ugly,
>> though. Opinions?
>
> Uggh :)

Indeed. I'm not even allowed to name pets here at home, so if someone
else wants to propose a standard please do. :-)

>
>> Given the number of problems we have now (I help about 1 dev per week
>> unbreak their system),
>
> I've seen you do this - kudos on your patience.
>
>>  I think we should also consider renaming the
>> existing libraries to not use the namespace package. That isn't a
>> trivial change, since it will mean updating every consumer as well as
>> the packaging done by distros. If we do decide to move them, I will
>> need someone to help put together a migration plan. Does anyone want
>> to volunteer to work on that?
>
> One thing to note for any migration plan on this - we should use a new
> pip package name for the new version so people with e.g.
>
>oslo.config>=1.2.0
>
> don't automatically get updated to a version which has the code in a
> different place. You should need to change to e.g.
>
>   osloconfig>=1.4.0

Yes, good point.

>
>> Before we make any changes, it would be good to know how bad this
>> problem still is. Do developers still see issues on clean systems, or
>> are all of the problems related to updating devstack boxes? Are people
>> figuring out how to fix or work around the situation on their own? Can
>> we make devstack more aggressive about deleting oslo libraries before
>> re-installing them? Are there other changes we can make that would be
>> less invasive?
>
> I don't have any great insight, but hope we can figure something out.
> It's crazy to think that even though namespace packages appear to work
> pretty well initially, it might end up being so unworkable we would need
> to switch.
>
> Mark.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Neutron] Networking Discussions last week

2014-04-08 Thread Mike Scherbakov
Great, thanks Assaf.

I will keep following it. I've added a link to this bp on this page:
https://wiki.openstack.org/wiki/NovaNeutronGapHighlights#Multi-Host, might
help people to get the status.


On Mon, Apr 7, 2014 at 11:37 AM, Assaf Muller  wrote:

>
>
> - Original Message -
> > Hi all,
> > we had a number of discussions last week in Moscow, with participation of
> > guys from Russia, Ukraine and Poland.
> > That was a great time!! Thanks everyone who participated.
> >
> > Special thanks to Przemek for great preparations, including the
> following:
> >
> https://docs.google.com/a/mirantis.com/presentation/d/115vCujjWoQ0cLKgVclV59_y1sLDhn2zwjxEDmLYsTzI/edit#slide=id.p
> >
> > I've searched over blueprints which require update after meetings:
> > https://blueprints.launchpad.net/fuel/+spec/multiple-cluster-networks
> > https://blueprints.launchpad.net/fuel/+spec/fuel-multiple-l3-agents
> > https://blueprints.launchpad.net/fuel/+spec/fuel-storage-networks
> > https://blueprints.launchpad.net/fuel/+spec/separate-public-floating
> > https://blueprints.launchpad.net/fuel/+spec/advanced-networking
> >
> > We will need to create one for UI.
> >
> > Neutron blueprints which are in the interest of large and thus complex
> > deployments, with the requirements of scalability and high availability:
> > https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
> > https://blueprints.launchpad.net/neutron/+spec/quantum-multihost
> >
> > The last one was rejected... there is might be another way of achieving
> same
> > use cases? Use case, I think, was explained in great details here:
> > https://wiki.openstack.org/wiki/NovaNeutronGapHighlights
> > Any thoughts on this?
> >
>
> https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
> This is the up-to-date blueprint, called "Distributed virtual
> router", or DVR. It's in early implementation reviews and is
> targeted for the Juno release.
>
> > Thanks,
> > --
> > Mike Scherbakov
> > #mihgen
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Enabling ServerGroup filters by default (was RE: [nova] Server Groups are not an optional element, bug or feature ?)

2014-04-08 Thread Jay Lau
2014-04-08 20:08 GMT+08:00 Russell Bryant :

> On 04/08/2014 06:16 AM, Day, Phil wrote:
> >> https://bugs.launchpad.net/nova/+bug/1303983
> >>
> >> --
> >> Russell Bryant
> >
> > Wow - was there really a need to get that change merged within 12 hours
> and before others had a chance to review and comment on it ?
>
> It was targeted against RC2 which we're trying to get out ASAP.  The
> change is harmless.
>
> > I see someone has already queried (post the merge) if there isn't a
> performance impact.
>
> The commit message indicates that when the API is not used, the
> scheduler filters are a no-op.  There is no noticable performance impact.
>
Thanks Russell, I asked the performance question in the gerrit review. I just
checked the logic again and did not find any potential performance issue.

>
> > I've raised this point before - but apart from non-urgent security fixes
> shouldn't there be a minimum review period to make sure that all relevant
> feedback can be given ?
>
> Separate topic, but no, I do not think there should be any rules on
> this.  I think in the majority of cases, people do the right thing.
>
> In this case, the patch was incredibly trivial and has no performance
> impact, so I don't see anything wrong.
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Operators & Design Summit ideas for Atlanta

2014-04-08 Thread Matt Van Winkle
It would be incredibly useful to get some of the "packagers" into this
conversation too.

Matt

On 4/8/14 4:51 AM, "Steven Hardy"  wrote:

>On Wed, Apr 02, 2014 at 08:24:00AM -0500, Dolph Mathews wrote:
>> On Mon, Mar 31, 2014 at 10:40 PM, Adam Young  wrote:
>> 
>> > On 03/28/2014 03:01 AM, Tom Fifield wrote:
>> >
>> >> Thanks to those projects that responded. I've proposed sessions in
>>swift,
>> >> ceilometer, tripleO and horizon.
>> >>
>> >
>> >
>> > Keystone would also be interested in user feedback, of course.
>> 
>> 
>> Crossing openstack-dev threads [1] here, gathering feedback on proposed
>> deprecations would be a great topic for such a session.
>> 
>> [1]
>> 
>>http://lists.openstack.org/pipermail/openstack-dev/2014-April/031652.html
>
>+1, I think a cross-project session on deprecation strategy/process would
>be
>hugely beneficial, particularly if we can solicit feedback from operators
>and deployers at the same time to agree a workable process.
>
>Steve
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder: Whats the way to do cleanup during service shutdown / restart ?

2014-04-08 Thread Duncan Thomas
Certainly adding an explicit shutdown or terminate call to the driver
seems reasonable - a blueprint to this effect would be welcome.
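
As a sketch of what that might look like (names hypothetical, not an existing
Cinder interface), an explicit hook invoked by the service on shutdown avoids
relying on __del__, which is not guaranteed to run and may fire only after
module globals have been torn down:

    # Hypothetical explicit-cleanup hook for a RemoteFS-style driver; the
    # service would call driver.terminate() from its own stop/signal
    # handling instead of trusting __del__ at interpreter exit.
    class RemoteFsDriverSketch(object):
        def __init__(self, mounted_shares):
            self._mounted_shares = list(mounted_shares)

        def terminate(self):
            for share in list(self._mounted_shares):
                mount_path = self._get_mount_point_for_share(share)
                self._do_umount(['umount', mount_path], True, share)
                self._mounted_shares.remove(share)

        # Stand-ins for the real driver helpers:
        def _get_mount_point_for_share(self, share):
            return '/mnt/' + share.replace('/', '_')

        def _do_umount(self, command, force, share):
            print('would run: %s' % ' '.join(command))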

On 7 April 2014 06:13, Deepak Shetty  wrote:
> To add:
> I was looking at the Nova code and it seems there is a framework for cleanup
> using the terminate calls. IIUC this works because libvirt calls terminate on
> the Nova instance when the VM is shutting down/being destroyed, hence terminate
> seems to be a good place to do cleanup on the Nova side. Something similar is
> missing on the Cinder side, and the __del__ way of cleanup isn't working, as I
> posted above.
>
>
> On Mon, Apr 7, 2014 at 10:24 AM, Deepak Shetty  wrote:
>>
>> Duncan,
>> Thanks for your response. Though I agree with what you said, I am still
>> trying to understand why I see what I see, i.e. why the base class
>> variable (_mounted_shares) shows up empty in __del__.
>> I am assuming here that the object is not completely gone/deleted, so its
>> vars must still be in scope and valid... but the debug prints suggest
>> otherwise :(
>>
>>
>> On Sun, Apr 6, 2014 at 12:07 PM, Duncan Thomas 
>> wrote:
>>>
>>> I'm not yet sure of the right way to do cleanup on shutdown, but any
>>> driver should do as much checking as possible on startup - the service
>>> might not have gone down cleanly (kill -9, SEGFAULT, etc), or
>>> something might have gone wrong during clean shutdown. The driver
>>> coming up should therefore not make any assumptions it doesn't
>>> absolutely have to, but rather should check and attempt cleanup
>>> itself, on startup.
>>>
>>> On 3 April 2014 15:14, Deepak Shetty  wrote:
>>> >
>>> > Hi,
>>> > I am looking to umount the glsuterfs shares that are mounted as
>>> > part of
>>> > gluster driver, when c-vol is being restarted or Ctrl-C'ed (as in
>>> > devstack
>>> > env) or when c-vol service is being shutdown.
>>> >
>>> > I tried to use __del__ in GlusterfsDriver(nfs.RemoteFsDriver) and it
>>> > didn't
>>> > work
>>> >
>>> > def __del__(self):
>>> > LOG.info(_("DPKS: Inside __del__ Hurray!, shares=%s")%
>>> > self._mounted_shares)
>>> > for share in self._mounted_shares:
>>> > mount_path = self._get_mount_point_for_share(share)
>>> > command = ['umount', mount_path]
>>> > self._do_umount(command, True, share)
>>> >
>>> > self._mounted_shares is defined in the base class (RemoteFsDriver)
>>> >
>>> > ^C2014-04-03 13:29:55.547 INFO cinder.openstack.common.service [-]
>>> > Caught
>>> > SIGINT, stopping children
>>> > 2014-04-03 13:29:55.548 INFO cinder.openstack.common.service [-] Caught
>>> > SIGTERM, exiting
>>> > 2014-04-03 13:29:55.550 INFO cinder.openstack.common.service [-] Caught
>>> > SIGTERM, exiting
>>> > 2014-04-03 13:29:55.560 INFO cinder.openstack.common.service [-]
>>> > Waiting on
>>> > 2 children to exit
>>> > 2014-04-03 13:29:55.561 INFO cinder.openstack.common.service [-] Child
>>> > 30185
>>> > exited with status 1
>>> > 2014-04-03 13:29:55.562 INFO cinder.volume.drivers.glusterfs [-] DPKS:
>>> > Inside __del__ Hurray!, shares=[]
>>> > 2014-04-03 13:29:55.563 INFO cinder.openstack.common.service [-] Child
>>> > 30186
>>> > exited with status 1
>>> > Exception TypeError: "'NoneType' object is not callable" in <bound method
>>> > GlusterfsDriver.__del__ of <cinder.volume.drivers.glusterfs.GlusterfsDriver
>>> > object at 0x2777ed0>> ignored
>>> > [stack@devstack-vm tempest]$
>>> >
>>> > So the _mounted_shares is empty ([]) which isn't true since I have 2
>>> > glusterfs shares mounted and when I print _mounted_shares in other
>>> > parts of
>>> > code, it does show me the right thing.. as below...
>>> >
>>> > From volume/drivers/glusterfs.py @ line 1062:
>>> > LOG.debug(_('Available shares: %s') % self._mounted_shares)
>>> >
>>> > which dumps the debugprint  as below...
>>> >
>>> > 2014-04-03 13:29:45.414 DEBUG cinder.volume.drivers.glusterfs
>>> > [req-2cf69316-cc42-403a-96f1-90e8e77375aa None None] Available shares:
>>> > [u'devstack-vm.localdomain:/gvol1', u'devstack-vm.localdomain:/gvol1']
>>> > from
>>> > (pid=30185) _ensure_shares_mounted
>>> > /opt/stack/cinder/cinder/volume/drivers/glusterfs.py:1061
>>> >
>>> > This brings in few Qs ( I am usign devstack env) ...
>>> >
>>> > 1) Is __del__ the right way to do cleanup for a cinder driver ? I have
>>> > 2
>>> > gluster backends setup, hence 2 cinder-volume instances, but i see
>>> > __del__
>>> > being called once only (as per above debug prints)
>>> > 2) I tried atexit and registering a function to do the cleanup.
>>> > Ctrl-C'ing
>>> > c-vol (from screen ) gives the same issue.. shares is empty ([]), but
>>> > this
>>> > time i see that my atexit handler called twice (once for each backend)
>>> > 3) In general, whats the right way to do cleanup inside cinder volume
>>> > driver
>>> > when a service is going down or being restarted ?
>>> > 4) The solution should work in both devstack (ctrl-c to shutdown c-vol
>>> > service) and production (where we do service restart c-vol)
>>> >
>>> > Would appreciate a response
>>> >
>>> > thanx,
>>> > 

Re: [openstack-dev] [Ironic][Agent]

2014-04-08 Thread Dickson, Mike (HP Servers)


From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
Sent: Tuesday, April 08, 2014 9:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic][Agent]


Guys, thank you very much for your comments,

I thought a lot about why we need to be so limited in IPA use cases. Now it is 
much clearer for me. Indeed, having some kind of agent running inside the host 
OS is not what many people want to see, and now I'd rather agree with that.

But there are still some questions which are difficult for me to answer.
0) There is plenty of old hardware which does not have IPMI/iLO at all. How is 
Ironic supposed to power it off and on? Ssh? But Ironic is not supposed to 
interact with the host OS.

I’m not sure about this yet. I’m inclined to say “we don’t support such 
hardware”, at least in the short-term. How does Ironic handle hardware without 
a power management interface today?

[Dickson, Mike (HP Servers)] I’d be inclined to agree.  Server class hardware 
would have a BMC of some sort.   I suppose you could alternatively do a driver 
for a smart PDU and let it control power brute force.  But irregardless  I 
don’t think relying on OS level power control is enough so essentially any 
“server” without some sort of power control outside of the OS is sort of a non 
starter.

1) We agreed that Ironic is the place where we can store hardware info 
(the 'extra' field in the node model). But many modern hardware configurations 
support hot-pluggable hard drives, CPUs, and even memory. How will Ironic know 
that the hardware configuration has changed? Does it need to know about 
hardware changes at all? Is it supposed that some monitoring agent (NOT the 
Ironic agent) will be used for that? But if we already have a discovery 
extension in the Ironic agent, then it sounds rational to use this extension 
for monitoring as well. Right?

I believe that hardware changes should not be made while an instance is 
deployed to a node (except maybe swapping a dead stick of RAM or something). If 
a user wants a node with more RAM (for example), they should provision a new 
node and destroy the old one, just like they would do with VMs provisioned by 
Nova.

[Dickson, Mike (HP Servers)] I think this would depend on the driver in use.  
iLO, for instance, can get many hardware details in real time, and I don't see 
a reason why a driver couldn't support that.  Maybe some attributes that 
describe the driver's capabilities?  In the absence of that, you could run a 
ramdisk and inventory the server on reboots. It wouldn't catch hot-plug changes 
until a reboot occurred, of course.

Mike

2) When I deal with some kind of hypervisor, I can always use the 'virsh list 
--all' command to know which nodes are running and which aren't. How am I 
supposed to know which nodes are still alive in the case of Ironic? IPMI? 
Again, IPMI is not always available. And if IPMI is available, then why do we 
need a heartbeat in the Ironic agent?

Every power driver today has some sort of “power status” command that Ironic 
relies on to tell if the node is alive, and I think we can continue to rely on 
this. We have a heartbeat in the agent to ensure that the agent process is 
still alive and reachable, as the agent might run for a long time before an 
instance is deployed to the node, and bugs happen.

Is that helpful?

// jim



Vladimir Kozhukalov

On Fri, Apr 4, 2014 at 9:46 PM, Ezra Silvera 
mailto:e...@il.ibm.com>> wrote:
> Ironic's responsibility ends where the host OS begins. Ironic is a bare metal 
> provisioning service, not a configuration management service.
I agree with the above, but just to clarify I would say that Ironic shouldn't 
*interact*  with the host OS once it booted. Obviously it can still perform BM 
tasks underneath the OS (while it's up and running)  if needed (e.g., force 
shutdown through IPMI, etc..)





Ezra



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] heat is not present in keystone service-list

2014-04-08 Thread Peeyush Gupta
Hi all,

I have been trying to install heat with devstack, as shown here: 
http://docs.openstack.org/developer/heat/getting_started/on_devstack.html 

I added the IMAGE_URLS to the localrc file. Then I ran unstack.sh and then 
stack.sh. Now, when I run heat stack-list, I get the following error:

    $ heat stack-list 
    publicURL endpoint for orchestration not found

I found that some people got this error because of a wrong endpoint in the 
keystone service-list output, but in my output there is no heat at all!

    $ keystone service-list
    +----------------------------------+----------+-----------+----------------------------+
    |                id                |   name   |    type   |        description         |
    +----------------------------------+----------+-----------+----------------------------+
    | 808b93d2008c48f69d42ae7555c27b6f |  cinder  |   volume  |   Cinder Volume Service    |
    | f57c596db43443d7975d890d9f0f4941 | cinderv2 |  volumev2 |  Cinder Volume Service V2  |
    | d8567205287a4072a489a89959801629 |   ec2    |    ec2    |  EC2 Compatibility Layer   |
    | 9064dc9d626045179887186d0b3647d0 |  glance  |   image   |    Glance Image Service    |
    | 70cf29f8ceed48d0a39ba7e29481636d | keystone |  identity | Keystone Identity Service  |
    | b6cca1393f814637bbb8f95f658ff70a |   nova   |  compute  |    Nova Compute Service    |
    | 0af6de1208a14d259006f86000d33f0d |  novav3  | computev3 |  Nova Compute Service V3   |
    | b170b6b212ae4843b3a6987c546bc640 |    s3    |     s3    |             S3             |
    +----------------------------------+----------+-----------+----------------------------+

Please help me resolve this error.
 
Thanks,
~Peeyush Gupta

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] run tests using testr on Python 3

2014-04-08 Thread Doug Hellmann
How many of the modules affected by this are slated to be moved to
their own libraries during Juno? I would expect those libraries to be
using testr, so can we just make the change as we graduate them?

Doug

On Tue, Apr 8, 2014 at 9:12 AM, victor stinner
 wrote:
> Hi,
>
> Oslo Incubator runs tests using testr on Python 2, but it uses nosetests on 
> Python 3 to run only a subset of the test suite (modules and tests ported to 
> Python 3). In my latest patch for Oslo Incubator (gettext), Ben Nemec wrote:
> "I think we could get around the nose issue by using a testr regex to filter 
> the tests we run for py33 (like I did in py27 for running everything but rpc 
> in parallel), but that's outside the scope of this change so +2."
>
> I tried to run Oslo Incubator tests with testr on Python 3, but testr fails 
> to load "openstack.common.eventlet_backdoor", because the eventlet module is 
> not installed (eventlet is not Python 3 compatible yet). If I understood 
> correctly, testr first loads all modules and then filters the tests to run 
> using the regex passed on the command line. If that's correct, I don't see 
> right now how to run Oslo Incubator tests with testr on Python 3. But I don't 
> know the testr tool well, so I have probably missed an option.
>
> I would like to use testr because many Oslo Incubator tests use testscenarios 
> (which doesn't work with nosetests).
>
> By the way, would it be possible to fix nosetests to use testscenarios?
>
> Victor
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-08 Thread Robert Collins
On 8 April 2014 11:51, Dan Prince  wrote:
>
>
> - Original Message -
>> From: "Robert Collins" 
>> To: "OpenStack Development Mailing List" 
>> Sent: Monday, April 7, 2014 4:00:30 PM
>> Subject: [openstack-dev] [TripleO] config options, defaults, oh my!
>>
>> So one interesting thing from the influx of new reviews is lots of
>> patches exposing all the various plumbing bits of OpenStack. This is
>> good in some ways (yay, we can configure more stuff), but in some ways
>> it's kind of odd - like - it's not clear when
>> https://review.openstack.org/#/c/83122/ is needed.
>>
>> I'm keen to expose things that are really needed, but I'm not sure
>> that /all/ options are needed - what do folk think?
>
> I think we can learn much from some of the more mature configuration 
> management tools in the community on this front. Using puppet as an example 
> here (although I'm sure other tools may do similar things as well)... Take 
> configuration of the Nova API server. There is a direct configuration 
> parameter for 'neutron_metadata_proxy_shared_secret' in the Puppet nova::api 
> class. This parameter is exposed in the class (sort of the equivalent of a 
> TripleO element) directly because it is convenient and many users may want to 
> customize the value. There are however hundreds of Nova config options and 
> most of them aren't exposed as parameters in the various Nova puppet classes. 
> For these it is possible to define a nova_config resource to configure *any* 
> nova.conf parameter in an ad hoc style for your own installation tuning 
> purposes.
>
> I could see us using a similar model in TripleO where our elements support 
> configuring common config elements directly, but we also allow people to tune 
> extra "undocumented" options for their own use. There is always going to be a 
> need for this as people need to tune things for their own installations with 
> options that may not be appropriate for the common set of elements.
>
> Standardizing this mechanism across many of the OpenStack service elements 
> would also make a lot of sense. Today we have this for Nova:
>
> nova:
>   verbose: False
> - Print more verbose output (set logging level to INFO instead of default 
> WARNING level).
>   debug: False
> - Print debugging output (set logging level to DEBUG instead of default 
> WARNING level).
>   baremetal:
> pxe_deploy_timeout: "1200"
>   .
>
> I could see us adding a generic mechanism like this to overlay with the 
> existing (documented) data structure:
>
> nova:
>config:
>default.compute_manager: 
> ironic.nova.compute.manager.ClusterComputeManager
>cells.driver: nova.cells.rpc_driver.CellsRPCDriver
>
> And in this manner a user might be able to add *any* supported config param 
> to the element.

I like this - something like

nova:
  config:
- section: default
  values:
- option: 'compute_manager'
  value: 'ironic.nova.compute.manager.ClusterComputeManager'
- section: cells
  values:
- option: 'driver'
  value: nova.cells.rpc_driver.CellsRPCDriver


should be able to represent most? all (it can handle repeating items)
oslo.config settings and render it easily:

{{#config}}
{{#comment}} repeats for each section {{/comment}}
[{{section}}]
{{#values}}
{{option}}={{value}}
{{/values}}
{{/config}}

>> Also, some things
>> really should be higher order operations - like the neutron callback
>> to nova right - that should be either set to timeout in nova &
>> configured in neutron, *or* set in both sides appropriately, never
>> one-half or the other.
>>
>> I think we need to sort out our approach here to be systematic quite
>> quickly to deal with these reviews.
>
> I totally agree. I was also planning to email the list about this very issue 
> this week :) My email subject was going to be "TripleO templates... an 
> upstream maintenance problem".
>
> For the existing reviews today I think we should be somewhat selective about 
> what parameters we expose as top level within the elements. That said we are 
> missing some rather fundamental features to allow users to configure 
> "undocumented" parameters as well. So we need to solve this problem quickly 
> because there are certainly some configuration corner cases that users will need.
>
> As is today we are missing some rather fundamental features in 
> os-apply-config and the elements to be able to pull this off. What we really 
> need is a generic INI style template generator. Or perhaps we could use 
> something like augeas or even devstack's simple ini editing functions to pull 
> this off. In any case the idea would be that we allow users to inject their 
> own undocumented config parameters into the various service config files. Or 
> perhaps we could auto-generate mustache templates based off of the upstream 
> sample config files. Many approaches would work here I think...

I agree that there are many approaches here - I think the sketch above
may be sufficient.
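
To make the "generic INI overlay" idea concrete, here is a minimal Python
sketch, assuming the pass-through data arrives as a {section: {option: value}}
mapping - purely illustrative, not an existing os-apply-config feature:

    import ConfigParser  # configparser on Python 3

    def overlay_ini(path, overrides):
        """Apply {'section': {'option': 'value'}} overrides to an ini file."""
        parser = ConfigParser.RawConfigParser()
        parser.read(path)
        for section, values in overrides.items():
            if section != 'DEFAULT' and not parser.has_section(section):
                parser.add_section(section)
            for option, value in values.items():
                parser.set(section, option, value)
        with open(path, 'w') as f:
            parser.write(f)

    overlay_ini('nova.conf', {  # e.g. /etc/nova/nova.conf
        'DEFAULT': {'compute_manager':
                    'ironic.nova.compute.manager.ClusterComputeManager'},
        'cells': {'driver': 'nova.cells.rpc_driver.CellsRPCDriver'},
    })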

Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-08 Thread Robert Collins
On 9 April 2014 00:48, Chris Jones  wrote:
> Hi
>
>> On 8 Apr 2014, at 11:20, Sean Dague  wrote:
>>
>> I think Phil is dead on. I'll also share the devstack experience here.
>> Until we provided the way for arbitrary pass through we were basically
>> getting a few patches every week that were "let me configure this
>> variable in the configs" over and over again.
>
> +1
>
> We can't be in the business of prescribing what users can/can't configure in 
> the daemons they are using us to deploy.

I think this points to a misapprehension about what I was saying that
I think other folk in the thread have had too - and I'm not going to
try to reply to each individually :).

The question isn't about prescribing limits - of course anyone can
configure anything.

The question is /how/. Do we model it? Do we punt and pass everything
through? Do we model some stuff?

-lots- of complexity in these setups is tied entirely to 'all servers
running X need Y' style questions, which is where Heat's value /should/
shine through - but we don't want every deployer to have to learn Heat
on day one of their deploy - so we need a way to:

 - deliver a great out of the box experience
 - let higher order configuration - cluster aware - be done well
 - whilst also surfacing the plumbing as needed.

Right now we have no differentiation between plumbing exposure and
semantically modelled configuration, and I think thats a problem.

I *loved* Dan's answer :)

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] use of the "oslo" namespace package

2014-04-08 Thread Doug Hellmann
On Tue, Apr 8, 2014 at 10:01 AM, Julien Danjou  wrote:
> On Tue, Apr 08 2014, Doug Hellmann wrote:
>
>> I would like for us to continue to use the oslo prefix in some cases,
>> because it makes naming simple libraries easier but more importantly
>> because it is an indicator that we intend those libraries to be much
>> more useful to OpenStack projects than to anyone else. For projects
>> where that isn't the case (cliff, stevedore, taskflow, tooz, etc.) we
>> are already choosing "non-branded" names.
>
> I understand that, but can you really point to a function that is
> so-damn-OpenStack-specific that if somebody stumbled upon it they
> would go "what the hell!"? I don't think so. :)

A good bit of what we have is legacy code from choices made early in
nova's history. I wouldn't expect anyone else to use oslo.config
(ConfigObj is simpler) or oslo.rootwrap (sudo). The log configuration,
ContextAdapter, and ContextFormatter stuff we have in the log module
is fairly specific to setting up logging for our apps. Someone else
*could* use it, but I don't know why they would.

Some of it is newer, but still meant to share code between OpenStack
projects. The VMware team says oslo.vmware isn't useful outside of
OpenStack. The scheduler code in the incubator is another example of
this, as are some of the wrappers around things like the datetime and
uuid modules.

So it's fair to say we should look at each library as it graduates and
decide how to treat it, but I think the default for most of what we
have in the incubator right now is going to be an oslo.* library.

Doug

>
> --
> Julien Danjou
> // Free Software hacker
> // http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-08 Thread Khanh-Toan Tran


> -Message d'origine-
> De : Jay Pipes [mailto:jaypi...@gmail.com]
> Envoyé : mardi 8 avril 2014 15:25
> À : openstack-dev@lists.openstack.org
> Objet : Re: [openstack-dev] [Nova] Hosts within two Availability Zones : 
> possible
> or not ?
>
> On Tue, 2014-04-08 at 10:49 +, Day, Phil wrote:
> > On a large cloud you’re protected against this to some extent if the
> > number of servers is >> number of instances in the quota.
> >
> > However it does feel that there are a couple of things missing to
> > really provide some better protection:
> >
> > - A quota value on the maximum size of a server group
> > - A policy setting so that the ability to use service-groups
> > can be controlled on a per project basis
>
> Alternatively, we could just have the affinity filters serve as weighting 
> filters
> instead of returning NoValidHosts.
>
> That way, a request containing an affinity hint would cause the scheduler 
> to
> prefer placing the new VM near (or not-near) other instances in the server
> group, but if no hosts exist that meet that criteria, the filter simply 
> finds a host
> with the most (or fewest, in case of anti-affinity) instances that meet 
> the affinity
> criteria.
>
> Best,
> -jay
>

The filters guarantee the desired effect, while the weighers just express a 
preference. Thus it makes sense to have anti-affinity as a filter. Otherwise, 
what good is it if users do not know whether their anti-affinity VMs are 
hosted on different hosts? I prefer the idea of an anti-affinity quota, and 
may propose that.

> > From: Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
> > Sent: 08 April 2014 11:32
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Nova] Hosts within two Availability
> > Zones : possible or not ?
> >
> > “Abusive usage”: if users can request anti-affinity VMs, why wouldn't
> > they always do so? This would result in users constantly requesting
> > that all their VMs be in the same anti-affinity group, which makes the
> > scheduler choose one physical host per VM. This would quickly flood the
> > infrastructure and defeat the admin's objectives (e.g.
> > consolidation that regroups VMs instead of spreading them, spare hosts,
> > etc.); at some point it will be reported back that there is no host
> > available, which makes for a bad user experience.
> >
> >
> >
> >
> >
> > From: Ian Wells [mailto:ijw.ubu...@cack.org.uk] Sent: Tuesday, 8 April
> > 2014 01:02 To: OpenStack Development Mailing List (not for usage
> > questions) Subject: Re: [openstack-dev] [Nova] Hosts within two
> > Availability Zones : possible or not ?
> >
> >
> >
> >
> > On 3 April 2014 08:21, Khanh-Toan Tran 
> > wrote:
> >
> > Otherwise we cannot provide redundancy to clients except by using
> > Regions, which are dedicated, network-separated infrastructure,
> > and the anti-affinity filter, which IMO is not
> > pragmatic as it tends toward abusive usage.
> >
> >
> >
> >
> >
> > I'm sorry, could you explain what you mean here by 'abusive usage'?
> >
> >
> >
> > --
> >
> >
> > Ian.
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra]Requesting consideration of httmock package for test-requirements in Juno

2014-04-08 Thread Paul Michali (pcm)
Reposting this, after discussing with Sean Dague…

For background, I have developed a REST client lib to talk to an H/W device with 
a REST server, for VPNaaS in Neutron. To support unit testing of this, I created a 
UT module and a mock REST server module and used the httmock package. I found 
it easy to use, and was able to easily create a sub-class of my UT to run the 
same test cases with real H/W, instead of the mock REST server. See the 
original email below, for links of the UT and REST mock to see how I used it.


I created a bug under requirements to propose adding httmock to the 
test-requirements. Sean mentioned that there is an existing mock package, 
called httpretty (which I found is used in the keystone client UTs), and that I 
should petition to see if httmock should replace httpretty, since the two appear to 
overlap in functionality.

I found this link, with a brief comparison of the two: 
http://marekbrzoska.wordpress.com/2013/08/28/mocking-http-requests-in-python/

So… I’m wondering if the community is interested in adopting this package (with 
the goal of deprecating the httpretty package). Otherwise, I will work on 
reworking the UT code I have to try to use httpretty.
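
For anyone unfamiliar with httmock, here is a minimal sketch of the style it
enables (the URL and handler below are invented for illustration; urlmatch and
HTTMock are the real httmock names):

    import requests
    from httmock import HTTMock, urlmatch

    # Handler matching GETs against a hypothetical REST resource; the
    # decorated function just returns a canned response, no web server.
    @urlmatch(netloc=r'.*\.example\.com$', path=r'^/api/v1/tunnels$')
    def tunnels_mock(url, request):
        return {'status_code': 200, 'content': '{"tunnels": []}'}

    with HTTMock(tunnels_mock):
        # httmock intercepts requests' send(), so this never hits the wire.
        resp = requests.get('https://router.example.com/api/v1/tunnels')
        assert resp.status_code == 200
        assert resp.json() == {'tunnels': []}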

Would be interested in peoples’ thoughts, especially those who have worked with 
httpretty.

Thanks in advance!


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Apr 4, 2014, at 10:44 AM, Paul Michali (pcm) <p...@cisco.com> wrote:

I’d like to get this added to the test-requirements for Neutron. It is a very 
flexible HTTP mock module that works with the Requests package. It is a 
decorator that wraps the Request’s send() method and allows easy mocking of 
responses, etc (w/o using a web server).

The bug is: https://bugs.launchpad.net/neutron/+bug/1282855

Initially I had requested both httmock and newer requests, but was requested to 
separate them, so this is to target httmock as it is more important (to me :) 
to get approval,


The review request is: https://review.openstack.org/#/c/75296/

An example of code that would use this:

https://github.com/openstack/neutron/blob/master/neutron/tests/unit/services/vpn/device_drivers/notest_cisco_csr_rest.py
https://github.com/openstack/neutron/blob/master/neutron/tests/unit/services/vpn/device_drivers/cisco_csr_mock.py

Looking forward to hearing whether or not we can include this package into Juno.

Thanks in advance!


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Split Oslo Incubator?

2014-04-08 Thread Doug Hellmann
On Tue, Apr 8, 2014 at 6:35 AM, Victor Stinner
 wrote:
> (Follow-up of the "[olso] use of the "oslo" namespace package" thread)
>
> Hi,
>
> The openstack.common module, also known as the "Oslo Incubator" or "OpenStack
> Common Libraries", has 44 dependencies. IMO we have reached a point where it has
> become too huge. Would it be possible to split it into smaller parts and distribute
> it on PyPI with a stable API? I don't know Oslo Incubator well enough to suggest
> the best granularity. A hint can be the number of dependencies.

Yes, as others have pointed out we will be doing this in Juno. See
https://wiki.openstack.org/wiki/Oslo/JunoGraduationPlans

>
> Sharing code is a good idea, but now we have SQLAlchemy, WSGI, cryptography,
> RPC, etc. in the same module. Who needs all these features at once? Oslo
> Incubator must be usable outside OpenStack.

While I agree that we should be thinking about code reuse outside of
OpenStack, it is perfectly OK to discover that we have a module other
OpenStack projects want to use but that won't (or shouldn't) be used
by anyone else.

>
>
> Currently, Oslo Incubator is installed and updated manually using an
> "update.sh" script which copies ".py" files and replaces "openstack.common" with
> "nova.openstack.common" (where nova is the name of the project where Oslo
> Incubator is installed).
>
> I guess that update.sh was written to solve the two following points, tell me
> if I'm wrong:
>
>  - unstable API: the code changes too often, whereas users don't want to
> update their code regularly. Nova maybe has an old version of Oslo Incubator
> because of that.
>
>  - only copy a few files, to avoid a lot of dependencies and useless files
>
> Smaller modules should solve these issues. They should be used as modules:
> installed system-wide, not copied into each project. So fixing a bug would only
> require a single change, without having to "synchronize" each project.
>
>
> Yesterday, I proposed to add a new time_monotonic() function to the timeutils
> module. I was asked to enhance existing modules (like Monotime) instead.
>
> We should now maybe move code from Oslo Incubator to "upstream" projects. For
> example, timeutils extends the iso8601 module. Should we maybe contribute to
> that project and replace usage of timeutils with direct calls to iso8601?

That may make sense for your new function. I think there are some
other things in timeutils that don't make sense to upstream. The
isotime() and parse_isotime() functions are relatively simple wrappers
around existing functions that give us consistent timestamps across
projects, for example. Those are useful to us as OpenStack developers,
but I'm not sure they're useful to anyone else as written.
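
For context, the kind of wrapper in question looks roughly like this (a
sketch reconstructed from memory of the incubator's timeutils, so treat the
details as illustrative rather than authoritative):

    import datetime

    _ISO8601_FORMAT = '%Y-%m-%dT%H:%M:%S'
    _ISO8601_FORMAT_SUBSECOND = '%Y-%m-%dT%H:%M:%S.%f'

    def isotime(at=None, subsecond=False):
        """Stringify a datetime in ISO 8601 format, defaulting to UTC 'now'.

        The value is consistency: every project calling this emits
        timestamps of exactly the same shape.
        """
        if at is None:
            at = datetime.datetime.utcnow()
        fmt = _ISO8601_FORMAT_SUBSECOND if subsecond else _ISO8601_FORMAT
        st = at.strftime(fmt)
        tz = at.tzinfo.tzname(None) if at.tzinfo else 'UTC'
        return st + ('Z' if tz == 'UTC' else tz)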

Doug

>
> Victor
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-08 Thread Steve Gordon
- Original Message -
> On Tue, 2014-04-08 at 10:49 +, Day, Phil wrote:
> > On a large cloud you’re protected against this to some extent if the
> > number of servers is >> the number of instances in the quota.
> > 
> > However it does feel that there are a couple of things missing to
> > really provide some better protection:
> > 
> > - A quota value on the maximum size of a server group
> > - A policy setting so that the ability to use service-groups
> > can be controlled on a per project basis
> 
> Alternately, we could just have the affinity filters serve as weighting
> filters instead of returning NoValidHosts.
> 
> That way, a request containing an affinity hint would cause the
> scheduler to prefer placing the new VM near (or not-near) other
> instances in the server group, but if no hosts exist that meet that
> criteria, the filter simply finds a host with the most (or fewest, in
> case of anti-affinity) instances that meet the affinity criteria.
> 
> Best,
> -jay

This is often called "soft" affinity/anti-affinity (though admittedly typically 
in the context of CPU affinity), and I had been independently mulling whether this 
would make sense as an additional policy for server groups. That said, although 
it's a simple solution for the problem noted in this thread, I don't think it's 
desirable to do this as a replacement for the existing support and remove 
any ability to have "hard" affinity/anti-affinity.

Some users actually expect/demand an error if the affinity or anti-affinity 
requirements of the workload can't be met; perhaps this is a case where 
sensible default tunables are required, and the operators that want/need to 
provide "hard" affinity/anti-affinity need to explicitly enable it?

-Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] heat is not present in keystone service-list

2014-04-08 Thread Pavlo Shchelokovskyy
Hi,

how fresh is your devstack? AFAIK heat became enabled by default not so long
ago. Try pulling the latest devstack master before running stack.sh.

Best,
Pavlo.


On Tue, Apr 8, 2014 at 5:00 PM, Peeyush Gupta  wrote:

> Hi all,
>
> I have been trying to install heat with devstack. As shown here
> http://docs.openstack.org/developer/heat/getting_started/on_devstack.html
>
> I added the IMAGE_URLS to the localrc file. Then I ran unstack.sh and then
> stack.sh. Now, when I run heat stack-list, I get the following error:
>
> $ heat stack-list
> publicURL endpoint for orchestration not found
>
> I found that some people got this error because of a wrong endpoint in
> keystone service-list, but in my output there is no heat!
>
> $ keystone service-list
>
> +----------------------------------+----------+-----------+---------------------------+
> |                id                |   name   |    type   |        description        |
> +----------------------------------+----------+-----------+---------------------------+
> | 808b93d2008c48f69d42ae7555c27b6f |  cinder  |   volume  |   Cinder Volume Service   |
> | f57c596db43443d7975d890d9f0f4941 | cinderv2 |  volumev2 |  Cinder Volume Service V2 |
> | d8567205287a4072a489a89959801629 |   ec2    |    ec2    |  EC2 Compatibility Layer  |
> | 9064dc9d626045179887186d0b3647d0 |  glance  |   image   |    Glance Image Service   |
> | 70cf29f8ceed48d0a39ba7e29481636d | keystone |  identity | Keystone Identity Service |
> | b6cca1393f814637bbb8f95f658ff70a |   nova   |  compute  |    Nova Compute Service   |
> | 0af6de1208a14d259006f86000d33f0d |  novav3  | computev3 |  Nova Compute Service V3  |
> | b170b6b212ae4843b3a6987c546bc640 |    s3    |     s3    |             S3            |
> +----------------------------------+----------+-----------+---------------------------+
>
> Please help me resolve this error.
>
> Thanks,
> ~Peeyush Gupta
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] How to solve the cgit repository browser line number misalignment in Chrome

2014-04-08 Thread Doug Hellmann
Maybe those changes should be added to our cgit stylesheet?

Doug

On Mon, Apr 7, 2014 at 9:23 PM, Zhongyue Luo  wrote:
> Hi,
>
> I know I'm not the only person who has had this problem, so here are two simple
> steps to get the lines and line numbers aligned.
>
> 1. Install the stylebot extension
>
> https://chrome.google.com/extensions/detail/oiaejidbmkiecgbjeifoejpgmdaleoha
>
> 2. Click on the download icon to install the custom style for
> git.openstack.org
>
> http://stylebot.me/styles/5369
>
> Thanks!
>
> --
> Intel SSG/STO/DCST/CBE
> 880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
> China
> +862161166500
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-08 Thread Chris Friesen

On 04/08/2014 07:25 AM, Jay Pipes wrote:

On Tue, 2014-04-08 at 10:49 +, Day, Phil wrote:

On a large cloud you’re protected against this to some extent if the
number of servers is >> the number of instances in the quota.

However it does feel that there are a couple of things missing to
really provide some better protection:

- A quota value on the maximum size of a server group
- A policy setting so that the ability to use service-groups
can be controlled on a per project basis


Alternately, we could just have the affinity filters serve as weighting
filters instead of returning NoValidHosts.

That way, a request containing an affinity hint would cause the
scheduler to prefer placing the new VM near (or not-near) other
instances in the server group, but if no hosts exist that meet that
criteria, the filter simply finds a host with the most (or fewest, in
case of anti-affinity) instances that meet the affinity criteria.


I'd be in favor of this.   I've actually been playing with an internal 
patch to do both of these things, though in my case I was just doing it 
via metadata on the group and a couple hacks in the scheduler and the 
compute node.


Basically I added a group_size metadata field and a "best_effort" flag 
to indicate whether we should error out or continue on if the policy 
can't be properly met.


Currently mine just falls back to the regular scheduler if it can't meet 
the policy, but I had been thinking about what it would take to do it 
like you suggest above, where we try to abide by the spirit of the 
policy even if we can't quite satisfy the letter of it.
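
As a rough sketch of that fallback shape (all names here are hypothetical, 
since the patch described is internal):

    def apply_group_policy(hosts, group, satisfies_policy):
        """Filter hosts by a server-group policy, honoring best-effort mode.

        'satisfies_policy' is a predicate such as anti-affinity, and
        'best_effort' is the group metadata flag described above.
        """
        filtered = [host for host in hosts if satisfies_policy(host, group)]
        if filtered:
            return filtered
        if group.metadata.get('best_effort'):
            # Spirit over letter: fall back to the unfiltered host list
            # rather than failing with NoValidHost.
            return hosts
        return []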


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack VM Import/Export

2014-04-08 Thread Brian Rosmaita
There's been some work in Glance on this already.  In addition to the BP Mark 
mentioned for import, please take a look at these so we don't duplicate efforts:

https://wiki.openstack.org/wiki/Glance-tasks-import
https://blueprints.launchpad.net/glance/+spec/new-download-workflow
https://wiki.openstack.org/wiki/Glance-tasks-export
https://wiki.openstack.org/wiki/Glance-tasks-api

The last one has links to previous mailing list discussion on this topic.

cheers,
brian


From: Aditya Thatte [aditya.that...@gmail.com]
Sent: Monday, April 07, 2014 12:42 PM
To: OpenStack Development Mailing List, (not for usage questions)
Subject: Re: [openstack-dev] OpenStack VM Import/Export


We are implementing that use case. My talk was selected for the summit. Please do 
visit:
http://openstacksummitmay2014atlanta.sched.org/mobile/#session:c0d9f8aefb90f93cfc8fc66b67b8403d

On 07-Apr-2014 6:37 PM, "Mark Washenberger" 
mailto:mark.washenber...@markwash.net>> wrote:
Hi Saju,

VM imports are likely to show up in Glance under this blueprint: 
https://blueprints.launchpad.net/glance/+spec/new-upload-workflow

Cheers,
markwash


On Mon, Apr 7, 2014 at 12:06 AM, Saju M <sajup...@gmail.com> wrote:
Hi,

Amazon provides an option to import/export VMs.
http://aws.amazon.com/ec2/vm-import/

Does OpenStack have the same feature?
Has anyone started to implement this in OpenStack? If yes, please point me 
to the blueprint. I would like to work on that.


Regards
Saju Madhavan
+91 09535134654

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-08 Thread Jay Dobies

I'm very wary of trying to make the decision in TripleO of what should and 
shouldn't be configurable in some other project. For sure the number of 
config options in Nova is a problem, and one that's been discussed many times 
at summits.   However I think you could also make the case/assumption for any 
service that the debate about having a config option has already been held 
within that service as part of the review that merged that option in the code - 
re-running the debate about whether something should be configurable via 
TripleO feels like some sort of policing function on configurability above and 
beyond what the experts in that service have already considered, and that 
doesn't feel right to me.


My general feeling is that I agree with this sentiment. In my experience 
on management tools, there's always someone who wants to turn the one 
knob I forgot to expose. And that's been on significantly simpler 
projects than OpenStack; the complexity and scale of the features means 
there's potentially a ton of tweaking to be done.


More generally, this starts to drift into the bigger question of what 
TripleO is. The notion of defaults or limiting configuration exposure is 
for prescriptive purposes. "You can change X because we think it's going 
to have a major impact." If we don't expose Y, it's because we're 
driving the user to not want to change it.


I've always assumed TripleO is very low-level. Put another way, 
non-prescriptive. It's not going to push an agenda that says you should 
be doing things a certain way, but rather gives you more than enough 
rope to hang yourself (just makes it easier).


The question of how to make things easier to grok for a new user lies in 
a different area. Either documentation (basic v. advanced user guide 
sort of thing) or potentially in the Tuskar GUI. More configuration 
options means Tuskar's life is more difficult, but to me, that's where 
we add in the notion of "You almost definitely want to configure these 
things, but if you're really insane you can look at this other set of 
stuff to configure."


So I think we need to have a way of specifying everything. And we need 
to have that way not kill us in the process. I like the proposed idea of 
an open-ended config area. It's us acknowledging that we're sitting on 
top of a dozen other projects. Admittedly, I don't fully understand 
Slagle's proposal, but the idea of pulling in samples from other 
projects and not making us acknowledge every configuration option is 
also appealing.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] use of the "oslo" namespace package

2014-04-08 Thread Julien Danjou
On Tue, Apr 08 2014, Doug Hellmann wrote:

> I would like for us to continue to use the oslo prefix in some cases,
> because it makes naming simple libraries easier but more importantly
> because it is an indicator that we intend those libraries to be much
> more useful to OpenStack projects than to anyone else. For projects
> where that isn't the case (cliff, stevedore, taskflow, tooz, etc.) we
> are already choosing "non-branded" names.

I understand that, but can you really point to a function that is
so-damn-OpenStack-specific that if somebody stumbled upon it they
would go "what the hell!"? I don't think so. :)

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting canceled for today.

2014-04-08 Thread Peter Pouliot
Hi Everyone,

Individuals are travelling this week, so we will need to postpone the 
Hyper-V discussion until next week.

p

Peter J. Pouliot CISSP
Sr. SDET OpenStack
Microsoft
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about modifying instance attribute(such as cpu-QoS, disk-QoS ) without shutdown the instance

2014-04-08 Thread Jay Pipes
On Tue, 2014-04-08 at 08:30 +, Zhangleiqiang (Trump) wrote:
> Hi, Stackers, 
> 
>   For Amazon, to call the ModifyInstanceAttribute API, the instance 
> must first be stopped. 
> 
>   In fact, the hypervisor can adjust these attributes online, but Amazon 
> and OpenStack do not support it.
>   
>   So I want to know: what is your advice about introducing the capability 
> of adjusting these instance attributes online?

What kind of attributes?

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues when running unit tests in OpenStack

2014-04-08 Thread Ben Nemec

On 04/08/2014 03:13 AM, victor stinner wrote:

Hi,

I have some issues when running unit tests in OpenStack. I would like to help, 
but I don't know where I should start and how I can fix these bugs. My use case 
is to run unit tests and rerun a single test if one or more tests failed. Well, 
it should be the most basic use case, no?


(1) First problem: if a Python module cannot be loaded, my terminal is flooded 
with a binary stream which looks like:

... 
tCase.test_deprecated_without_replacement\xd7\xe1\x06\xa1\xb3)\x01@l...@atests.unit.test_versionutils.DeprecatedTestCa
 ...

IMO it's a huge bug in the testr tool: the "testr run" command should not write 
binary data to stdout. It makes development very hard.


(2) When a test fails, it's hard to find the command to rerun a single failing 
test.

Using the tool trial, I can just copy/paste the "FQDN" name of the failing test and run 
"trial FQDN". Example:

trial tests.unit.test_timeutils.TestIso8601Time.test_west_normalize

Using the tool nosetests, you have to add a colon between the module and the 
method. Example:

nosetests tests.unit.test_timeutils:TestIso8601Time.test_west_normalize

Using tox, in most OpenStack projects, adding the name of the failing test to 
the tox command is usually ignored. I guess that it's an issue with the tox.ini of 
the project? tox reruns the whole test suite, which is usually very slow (it 
takes several minutes even on a fast computer). Example:

tox -e py27 tests.unit.test_timeutils.TestIso8601Time.test_west_normalize


The way to do this that works in every project where I've tried it is to 
add a -- before the name of the test.  That way it will get passed 
straight to testr.  So your command above would become:


tox -e py27 -- tests.unit.test_timeutils.TestIso8601Time.test_west_normalize

(hopefully that isn't going to wrap)
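
For reference, this works because of how OpenStack projects typically wire 
testr into tox.ini; a sketch of the common pattern (not copied from any 
particular repo):

    [testenv]
    # Everything after "--" on the tox command line is collected into
    # {posargs} and handed to testr as a test id / regex filter.
    commands = python setup.py testr --slowest --testr-args='{posargs}'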

Note that in some projects (tempest and oslo-incubator) there are tox 
targets that already use testr regexes, so in those targets this syntax 
won't work because you can't specify two regexes, at least as far as I 
can tell.  Both of those projects have an "all" tox target that doesn't 
filter using a regex so you can pass the test name as above and it 
should work.  For oslo-incubator that should be going away once the rpc 
code is removed because rpc tests are the only reason it's filtering 
tests in the py27 target today.




Sometimes I try to activate the virtualenv and then type:

testr run tests.unit.test_timeutils.TestIso8601Time.test_west_normalize

It usually fails for different reasons.

Example with python-swiftclient. I run the unit tests using "tox -e py33". Some 
tests are failing. I enter the virtual environment and type the following command to 
rerun a failing test:

testr run tests.test_swiftclient.TestPutObject.test_unicode_ok

The test is not run again; in fact, no test is run at all. It's surprising because 
the same command works with Python 2. Is it perhaps a bug in testr?



(3) testscenarios doesn't work with nosetests. It's annoying because for the 
reasons listed above, I prefer to run tests using nosetests. Why do we use 
testscenarios and not something else? Do we plan to support nosetests (and 
other Python test runners) for testscenarios?


It should be possible to switch back to testr for the py33 tests if we 
use the regex syntax for filtering out the tests that don't work in py3. 
 The rpc example is 
https://github.com/openstack/oslo-incubator/blob/master/tox.ini#L21 and 
if you look at what tests2skip.py does in the tripleo tempest element it 
should be an example of how to filter out failing tests: 
https://github.com/openstack/tripleo-image-elements/blob/master/elements/tempest/bin/run-tempest#L110
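
The shape of that kind of filter, illustrative only (the regex and the 
environment name here are invented, in the spirit of the linked tox.ini):

    [testenv:py26]
    # The negative lookahead runs everything except the rpc tests while
    # still going through testr; note you only get this one regex.
    commands = python setup.py testr --testr-args='(?!tests\.unit\.rpc)tests.*$'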





Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues when running unit tests in OpenStack

2014-04-08 Thread Mehdi Abaakouk

Hi,

Le 2014-04-08 17:11, Ben Nemec a écrit :

It should be possible to switch back to testr for the py33 tests if we
use the regex syntax for filtering out the tests that don't work in
py3.  The rpc example is
https://github.com/openstack/oslo-incubator/blob/master/tox.ini#L21
and if you look at what tests2skip.py does in the tripleo tempest
element it should be an example of how to filter out failing tests:
https://github.com/openstack/tripleo-image-elements/blob/master/elements/tempest/bin/run-tempest#L110


Unfortunately, it won't work because testr/subunit needs to load all 
Python files to compute the list of tests, and then filter it with the 
regexes.
But in Python 3, not all files can be loaded yet. This is why nosetests is 
used currently.


Regards,
---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Some Thoughts on Log Message ID Generation Blueprint

2014-04-08 Thread Ben Nemec

On 04/07/2014 11:57 PM, Peng Wu wrote:

Thanks for the comments.
Maybe we could just search the English log. :-)


Right, but the problem is that the English log is not guaranteed to 
remain the same.  An (extremely contrived) example:


Say we have a log message like "Failed to not find entity: %s"

That's really confusing because of the double negative.  So we change it 
to "Found an unexpected entity: %s".  It's highly unlikely that a search 
for the changed message is going to also turn up a blog post or whatever 
about the first message.


Granted the example is contrived, but I've seen just that sort of log 
message rewording done in real changes too.
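
A sketch of what a stable id buys you (the "OSLO1234" format is invented 
for illustration):

    import logging

    LOG = logging.getLogger(__name__)

    def report_unexpected(entity):
        # The id survives rewording: a search for "OSLO1234" finds
        # discussions of either phrasing of the message.
        LOG.error("OSLO1234: Found an unexpected entity: %s", entity)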


Maybe the dual logging is enough for everyone, but I don't think it 
addresses all of the reasons for wanting message ids so I don't think we 
can just say we're not going to do message ids because of it.  It's 
perfectly valid to say we're not going to do message ids because no one 
wants them enough to actually implement them though. :-)




But I just find it hard to meet all the requirements of log message IDs.
It is just a thought that we can avoid message ID generation by using
the English log.
For debugging purposes, we can just read the English log.

Regards,
   Peng Wu

On Mon, 2014-04-07 at 11:19 -0500, Ben Nemec wrote:

On 04/03/2014 10:19 PM, Peng Wu wrote:

Hi,

Recently I read the "Separate translation domain for log messages"
blueprint[1], and I found that we can store both English Message Log and
Translated Message Log with some configurations.

I am an i18n Software Engineer, and we are thinking about "Add message
IDs for log messages" blueprint[2]. My thought is that if we can store
both English Message Log and Translated Message Log, we can skip the
need of Log Message ID Generation.

I also commented the "Add message IDs for log messages" blueprint[2].

If the servers always store English Log Messages, maybe we don't need
the "Add message IDs for log messages" blueprint[2] any more.

Feel free to comment this proposal.

Thanks,
Peng Wu

Refer URL:
[1]
https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
[2] https://blueprints.launchpad.net/oslo/+spec/log-messages-id


As I recall, there were more reasons for log message ids than just i18n
issues.  There's also the fact that an error message might change
significantly from one release to another, but if it's still addressing
the same issue then the message id could be left alone so searching for
it would still return relevant results, regardless of the release.

That said, I don't know if anyone is actually working on the message id
blueprint so I'm not sure how much it matters at this point. :-)

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Infra][Ceilometer][MagnetoDB] HBase database in devstack

2014-04-08 Thread Ilya Sviridov
Hello infra and devstack,


I would like to start thread about adding of nosql databases support to
devstack for development and gating purposes.

Currently there is necessity of HBase and Cassandra in MagnetoDB project
for running tempest tests.

We have implemented Cassandra as part of MagnetoDB devstack integration (
https://github.com/stackforge/magnetodb/tree/master/contrib/devstack) and
started working on HBase now (
https://blueprints.launchpad.net/magnetodb/+spec/devstack-add-hbase).

On the other hand, HBase and Cassandra are supported as database backends in
Ceilometer, and it can be useful for development and gating to have them in
devstack.

So, it looks like a common task for both projects that will eventually be
integrated into devstack, so I'm suggesting we start that discussion in order
to push ahead with it.

Cassandra and HBase are both Java applications, so they come with a JDK as a
dependency. It has been proven that we can use the OpenJDK available in the
Debian repos.

The databases themselves are distributed in two ways:

- as Debian packages built and hosted by software vendors
     HBase: deb http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x HDP main
     Cassandra: deb http://debian.datastax.com/community stable main
- as tar.gz hosted on the Apache Download Mirrors
     HBase: http://www.apache.org/dyn/closer.cgi/hbase/
     Cassandra: http://www.apache.org/dyn/closer.cgi/cassandra/

The distributions provided by the Apache Foundation look more reliable, but I
have heard that third-party sources may not be stable enough to introduce as
dependencies in devstack gating.

I have registered a BP in the devstack project about adding HBase,
https://blueprints.launchpad.net/devstack/+spec/add-hbase-to-devstack, and
we have started working on it.

Please share your thoughts about it to help make it real.
Thank you.


Have a nice day,
Ilya Sviridov
isviridov @ FreeNode
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] March core team review

2014-04-08 Thread Devananda van der Veen
As March has come to a close, and Juno is open for development, I would
like to look at our review stats and see if the core review team should be
adjusted to reflect current activity. Also, since I believe that our
development pace needs to accelerate, I would like to increase the size of
the team from its current size of six.

As a quick outline of what "core" means within this team: in my view, it's
a combination of how active and effective someone's reviews are, and how
much they participate in relevant discussions. Ideally, cores should aim
for about two reviews per work day, or about 40 per month, but it's not a
hard limit, and I don't believe we should remove folks simply because their
review stats slip below this line if their input is still felt and valued
within the project.

With ~160 reviews submitted in the last month, in an ideal situation, we
would be able to keep up with submissions if we had a review team size of 8
(since it takes a minimum of two cores to land a patch). Given that reviews
have been taking an average of 2.8 patch sets, we would actually fall
behind if reviewers didn't do more than 2/day - and for a large part of
Icehouse, we were pretty far behind... For this reason, reviews by non-core
members are extremely helpful because they often catch issues early and
allow core reviewers to focus on patches that have already received some
+1's.

Here are the current 90-day stats, cut at the point where folks are meeting
the suggested quantity of reviews.

http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt

+--------------------+---------------------------------------+----------------+
|      Reviewer      | Reviews   -2  -1  +1  +2  +A    +/- % | Disagreements* |
+--------------------+---------------------------------------+----------------+
|    devananda **    |     358   22 101   7 228 132    65.6% |   10 (  2.8%)  |
|   lucasagomes **   |     316    4  99   4 209  61    67.4% |    8 (  2.5%)  |
|    nobodycam **    |     199    0  24   0 175  81    87.9% |   11 (  5.5%)  |
|        rloo        |     180    0  74 106   0   0    58.9% |    8 (  4.4%)  |
|       whaom        |     153    0  56  97   0   0    63.4% |   15 (  9.8%)  |
|       yuriyz       |     136    0  57  79   0   0    58.1% |   14 ( 10.3%)  |
|    max_lobur **    |     131    1  44  38  48   4    65.6% |    7 (  5.3%)  |
+--------------------+---------------------------------------+----------------+

So, I'd like to formally propose that Ruby (rloo), Haomeng (whaom), and
Yuriy (yuriyz) be added to the core team at this time. I believe they have
all been very helpful over the last few months.

Regards,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] Meeting Tuesday April 8 - 1900 UTC

2014-04-08 Thread Ed Leafe
On Apr 7, 2014, at 7:01 PM, Brian Curtin  wrote:

> https://wiki.openstack.org/wiki/Meetings#python-openstacksdk_Meeting
> 
> Date/Time: Tuesday 25 March - 1900 UTC / 1400 CDT

I don't have a time machine, so let's do this today!

Tuesday, 8 April - 1900 UTC

> IRC channel: #openstack-meeting-3
> 
> About the project:
>   https://wiki.openstack.org/wiki/SDK-Development/PythonOpenStackSDK
> 
> If you have questions, all of us lurk in #openstack-sdks on freenode!

I've updated the agenda: 

https://wiki.openstack.org/wiki/Meetings/PythonOpenStackSDK#Agenda_for_2014-04-08_1900_UTC


-- Ed Leafe





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] March core team review

2014-04-08 Thread Lucas Alvares Gomes
>
> So, I'd like to formally propose that Ruby (rloo), Haomeng (whaom), and
> Yuriy (yuriyz) be added to the core team at this time. I believe they have
> all been very helpful over the last few months.
>
+1 for all! Good stuff :)

Cheers,
Lucas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues when running unit tests in OpenStack

2014-04-08 Thread Doug Hellmann
On Tue, Apr 8, 2014 at 11:21 AM, Mehdi Abaakouk  wrote:
> Hi,
>
> Le 2014-04-08 17:11, Ben Nemec a écrit :
>
>> It should be possible to switch back to testr for the py33 tests if we
>> use the regex syntax for filtering out the tests that don't work in
>> py3.  The rpc example is
>> https://github.com/openstack/oslo-incubator/blob/master/tox.ini#L21
>> and if you look at what tests2skip.py does in the tripleo tempest
>> element it should be an example of how to filter out failing tests:
>>
>> https://github.com/openstack/tripleo-image-elements/blob/master/elements/tempest/bin/run-tempest#L110
>
>
> Unfortunately, it won't work because testr/subunit needs to load all Python
> files to compute the list of tests, and then filter it with the regexes.
> But in Python 3, not all files can be loaded yet. This is why nosetests is
> used currently.

We can remove the rpc code from the incubator after Juno, but until
then we will need to work around this (either in the incubator or by
waiting until the modules move to their own repositories as part of
graduating to create new libraries).

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] March core team review

2014-04-08 Thread Roman Prykhodchenko
+1 for those guys.


On Tue, Apr 8, 2014 at 6:35 PM, Lucas Alvares Gomes
wrote:

> So, I'd like to formally propose that Ruby (rloo), Haomeng (whaom), and
>> Yuriy (yuriyz) be added to the core team at this time. I believe they have
>> all been very helpful over the last few months.
>>
> +1 for all! Good stuff :)
>
> Cheers,
> Lucas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues when running unit tests in OpenStack

2014-04-08 Thread Ben Nemec

On 04/08/2014 10:21 AM, Mehdi Abaakouk wrote:

Hi,

Le 2014-04-08 17:11, Ben Nemec a écrit :

It should be possible to switch back to testr for the py33 tests if we
use the regex syntax for filtering out the tests that don't work in
py3.  The rpc example is
https://github.com/openstack/oslo-incubator/blob/master/tox.ini#L21
and if you look at what tests2skip.py does in the tripleo tempest
element it should be an example of how to filter out failing tests:
https://github.com/openstack/tripleo-image-elements/blob/master/elements/tempest/bin/run-tempest#L110


Unfortunately, it won't work because testr/subunit needs to load all
Python files to compute the list of tests, and then filter it with the
regexes.
But in Python 3, not all files can be loaded yet. This is why nosetests is
used currently.


Oh, right.  I figured there was probably a reason I didn't object to 
using nose in the first place, and that would be it.  Thanks for the 
reminder. :-)


-Ben


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Neutron] Networking Discussions last week

2014-04-08 Thread Salvatore Orlando
Hi Mike,

For all neutron-related fuel developments please feel free to reach out to
the neutron team for any help you might need, either by using the ML or
pinging people in #openstack-neutron.
Regarding the fuel blueprints you linked in your first post, I am looking
in particular at
https://blueprints.launchpad.net/fuel/+spec/separate-public-floating

I am not entirely sure what the semantics of 'public' and 'floating' are
here, but I was wondering if this would be achievable at all with the
current neutron API, since within a subnet CIDR there's no 'qualitative'
distinction between allocation pools; so it would not be possible to have a
'public' IP pool and a 'floating' IP pool in the same L3 segment.

Also, regarding nova gaps, it might be worth noting that Mark McClain
(markmcclain) and Brent Eagles (beagles) are keeping track of current
feature/testing/quality gaps and also covering progress for the relevant
work items.

Regards,
Salvatore


On 8 April 2014 14:46, Mike Scherbakov  wrote:

> Great, thanks Assaf.
>
> I will keep following it. I've added a link to this bp on this page:
> https://wiki.openstack.org/wiki/NovaNeutronGapHighlights#Multi-Host,
> might help people to get the status.
>
>
> On Mon, Apr 7, 2014 at 11:37 AM, Assaf Muller  wrote:
>
>>
>>
>> - Original Message -
>> > Hi all,
>> > we had a number of discussions last week in Moscow, with participation
>> of
>> > guys from Russia, Ukraine and Poland.
>> > That was a great time!! Thanks everyone who participated.
>> >
>> > Special thanks to Przemek for great preparations, including the
>> following:
>> >
>> https://docs.google.com/a/mirantis.com/presentation/d/115vCujjWoQ0cLKgVclV59_y1sLDhn2zwjxEDmLYsTzI/edit#slide=id.p
>> >
>> > I've searched over blueprints which require update after meetings:
>> > https://blueprints.launchpad.net/fuel/+spec/multiple-cluster-networks
>> > https://blueprints.launchpad.net/fuel/+spec/fuel-multiple-l3-agents
>> > https://blueprints.launchpad.net/fuel/+spec/fuel-storage-networks
>> > https://blueprints.launchpad.net/fuel/+spec/separate-public-floating
>> > https://blueprints.launchpad.net/fuel/+spec/advanced-networking
>> >
>> > We will need to create one for UI.
>> >
>> > Neutron blueprints which are in the interest of large and thus complex
>> > deployments, with the requirements of scalability and high availability:
>> > https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
>> > https://blueprints.launchpad.net/neutron/+spec/quantum-multihost
>> >
>> > The last one was rejected... there might be another way of achieving the
>> > same use cases? The use case, I think, was explained in great detail here:
>> > https://wiki.openstack.org/wiki/NovaNeutronGapHighlights
>> > Any thoughts on this?
>> >
>>
>> https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
>> This is the up-to-date blueprint, called "Distributed virtual
>> router", or DVR. It's in early implementation reviews and is
>> targeted for the Juno release.
>>
>> > Thanks,
>> > --
>> > Mike Scherbakov
>> > #mihgen
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Mike Scherbakov
> #mihgen
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] heat is not present in keystone service-list

2014-04-08 Thread Steven Dake

On 04/08/2014 07:00 AM, Peeyush Gupta wrote:

Hi all,

I have been trying to install heat with devstack. As shown here 
http://docs.openstack.org/developer/heat/getting_started/on_devstack.html


I added the IMAGE_URLS to the localrc file. Then I ran unstack.sh and 
then stack.sh. Now, when I run heat stack-list, I get the following error:


$ heat stack-list
publicURL endpoint for orchestration not found

I found that some people got this error because of a wrong endpoint in 
keystone service-list, but in my output there is no heat!


My guess is your devstack is old enough that it does not have heat 
enabled by default.  You can add the following to your localrc:


# heat
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
IMAGE_URLS+=",http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F18-x86_64-cfntools.qcow2,http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F18-i386-cfntools.qcow2,http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F19-i386-cfntools.qcow2,http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F19-x86_64-cfntools.qcow2,http://download.fedoraproject.org/pub/fedora/linux/releases/20/Images/x86_64/Fedora-x86_64-20-20131211.1-sda.qcow2"



$ keystone service-list
+----------------------------------+----------+-----------+---------------------------+
|                id                |   name   |    type   |        description        |
+----------------------------------+----------+-----------+---------------------------+
| 808b93d2008c48f69d42ae7555c27b6f |  cinder  |   volume  |   Cinder Volume Service   |
| f57c596db43443d7975d890d9f0f4941 | cinderv2 |  volumev2 |  Cinder Volume Service V2 |
| d8567205287a4072a489a89959801629 |   ec2    |    ec2    |  EC2 Compatibility Layer  |
| 9064dc9d626045179887186d0b3647d0 |  glance  |   image   |    Glance Image Service   |
| 70cf29f8ceed48d0a39ba7e29481636d | keystone |  identity | Keystone Identity Service |
| b6cca1393f814637bbb8f95f658ff70a |   nova   |  compute  |    Nova Compute Service   |
| 0af6de1208a14d259006f86000d33f0d |  novav3  | computev3 |  Nova Compute Service V3  |
| b170b6b212ae4843b3a6987c546bc640 |    s3    |     s3    |             S3            |
+----------------------------------+----------+-----------+---------------------------+

Please help me resolve this error.
Thanks,
~Peeyush Gupta


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

